00:00:00.000 Started by upstream project "autotest-per-patch" build number 132807
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.093 The recommended git tool is: git
00:00:00.093 using credential 00000000-0000-0000-0000-000000000002
00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.138 Fetching changes from the remote Git repository
00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.179 Using shallow fetch with depth 1
00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.179 > git --version # timeout=10
00:00:00.216 > git --version # 'git version 2.39.2'
00:00:00.216 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.600 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.610 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.621 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.621 > git config core.sparsecheckout # timeout=10
00:00:07.632 > git read-tree -mu HEAD # timeout=10
00:00:07.646 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.666 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.666 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.751 [Pipeline] Start of Pipeline
00:00:07.761 [Pipeline] library
00:00:07.762 Loading library shm_lib@master
00:00:07.762 Library shm_lib@master is cached. Copying from home.
00:00:07.777 [Pipeline] node
00:00:07.787 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.789 [Pipeline] {
00:00:07.795 [Pipeline] catchError
00:00:07.796 [Pipeline] {
00:00:07.805 [Pipeline] wrap
00:00:07.810 [Pipeline] {
00:00:07.815 [Pipeline] stage
00:00:07.816 [Pipeline] { (Prologue)
00:00:08.072 [Pipeline] sh
00:00:08.356 + logger -p user.info -t JENKINS-CI
00:00:08.371 [Pipeline] echo
00:00:08.373 Node: WFP21
00:00:08.380 [Pipeline] sh
00:00:08.679 [Pipeline] setCustomBuildProperty
00:00:08.691 [Pipeline] echo
00:00:08.693 Cleanup processes
00:00:08.698 [Pipeline] sh
00:00:08.983 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.983 2076808 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.997 [Pipeline] sh
00:00:09.282 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:09.282 ++ grep -v 'sudo pgrep'
00:00:09.282 ++ awk '{print $1}'
00:00:09.282 + sudo kill -9
00:00:09.282 + true
00:00:09.297 [Pipeline] cleanWs
00:00:09.306 [WS-CLEANUP] Deleting project workspace...
00:00:09.306 [WS-CLEANUP] Deferred wipeout is used...
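The cleanup step above is a small pgrep/kill idiom: list every process still running out of the test workspace, drop the pgrep invocation itself from the listing, and force-kill whatever remains. A minimal standalone sketch of that idiom (not part of the captured log; the workspace path is the one used by this job):

    #!/usr/bin/env bash
    # Kill leftover SPDK processes from a previous run in this workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # pgrep -af matches against full command lines; strip the pgrep itself
    # and keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

    # With no stale processes the PID list is empty and kill exits nonzero;
    # '|| true' keeps the step green, matching the '+ true' in the trace.
    sudo kill -9 $pids || true

In the run above the pipeline matched only its own pgrep, so kill -9 ran with an empty argument list and the step fell through to true.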
00:00:09.312 [WS-CLEANUP] done
00:00:09.317 [Pipeline] setCustomBuildProperty
00:00:09.330 [Pipeline] sh
00:00:09.609 + sudo git config --global --replace-all safe.directory '*'
00:00:09.718 [Pipeline] httpRequest
00:00:10.400 [Pipeline] echo
00:00:10.402 Sorcerer 10.211.164.112 is alive
00:00:10.410 [Pipeline] retry
00:00:10.412 [Pipeline] {
00:00:10.424 [Pipeline] httpRequest
00:00:10.428 HttpMethod: GET
00:00:10.428 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.429 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.431 Response Code: HTTP/1.1 200 OK
00:00:10.432 Success: Status code 200 is in the accepted range: 200,404
00:00:10.432 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.589 [Pipeline] }
00:00:11.605 [Pipeline] // retry
00:00:11.612 [Pipeline] sh
00:00:11.893 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.911 [Pipeline] httpRequest
00:00:12.508 [Pipeline] echo
00:00:12.509 Sorcerer 10.211.164.112 is alive
00:00:12.518 [Pipeline] retry
00:00:12.519 [Pipeline] {
00:00:12.532 [Pipeline] httpRequest
00:00:12.536 HttpMethod: GET
00:00:12.537 URL: http://10.211.164.112/packages/spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:00:12.538 Sending request to url: http://10.211.164.112/packages/spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:00:12.558 Response Code: HTTP/1.1 200 OK
00:00:12.559 Success: Status code 200 is in the accepted range: 200,404
00:00:12.559 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:01:12.972 [Pipeline] }
00:01:12.990 [Pipeline] // retry
00:01:12.998 [Pipeline] sh
00:01:13.286 + tar --no-same-owner -xf spdk_2e1d23f4b70ea8940db7624b3bb974a4a8658ec7.tar.gz
00:01:15.846 [Pipeline] sh
00:01:16.126 + git -C spdk log --oneline -n5
00:01:16.126 2e1d23f4b fuse_dispatcher: make header internal
00:01:16.126 3318278a6 vhost: check if vsession exists before remove scsi vdev
00:01:16.126 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:16.126 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:16.126 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:16.137 [Pipeline] }
00:01:16.151 [Pipeline] // stage
00:01:16.161 [Pipeline] stage
00:01:16.163 [Pipeline] { (Prepare)
00:01:16.181 [Pipeline] writeFile
00:01:16.197 [Pipeline] sh
00:01:16.482 + logger -p user.info -t JENKINS-CI
00:01:16.496 [Pipeline] sh
00:01:16.782 + logger -p user.info -t JENKINS-CI
00:01:16.796 [Pipeline] sh
00:01:17.083 + cat autorun-spdk.conf
00:01:17.083 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.083 SPDK_TEST_NVMF=1
00:01:17.083 SPDK_TEST_NVME_CLI=1
00:01:17.083 SPDK_TEST_NVMF_NICS=mlx5
00:01:17.083 SPDK_RUN_UBSAN=1
00:01:17.083 NET_TYPE=phy
00:01:17.087 RUN_NIGHTLY=0
00:01:17.091 [Pipeline] readFile
00:01:17.118 [Pipeline] withEnv
00:01:17.120 [Pipeline] {
00:01:17.133 [Pipeline] sh
00:01:17.417 + set -ex
00:01:17.417 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:17.417 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:17.417 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.417 ++ SPDK_TEST_NVMF=1
00:01:17.417 ++ SPDK_TEST_NVME_CLI=1
00:01:17.417 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:17.417 ++ SPDK_RUN_UBSAN=1
00:01:17.418 ++ NET_TYPE=phy
00:01:17.418 ++ RUN_NIGHTLY=0
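autorun-spdk.conf, printed by the cat above, is an ordinary shell fragment of KEY=value pairs; the job simply sources it and branches on the flags, which is exactly what the trace resumes with below for SPDK_TEST_NVMF_NICS. A condensed sketch of that pattern (the mlx4 branch is an illustrative assumption; only the mlx5 branch is exercised in this log):

    #!/usr/bin/env bash
    set -ex

    # The conf file is plain shell, so sourcing it sets the test flags.
    source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf

    # Map the NIC type under test to kernel driver modules.
    case $SPDK_TEST_NVMF_NICS in
        mlx5) DRIVERS=mlx5_ib ;;
        mlx4) DRIVERS=mlx4_ib ;;   # assumed mapping, not shown in this run
    esac

    if [[ -n $DRIVERS ]]; then
        # Unload anything that could conflict first; 'not currently loaded'
        # errors are harmless, hence the '|| true'.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe $D
        done
    fi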
00:01:17.418 + case $SPDK_TEST_NVMF_NICS in
00:01:17.418 + DRIVERS=mlx5_ib
00:01:17.418 + [[ -n mlx5_ib ]]
00:01:17.418 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:17.418 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:24.005 rmmod: ERROR: Module irdma is not currently loaded
00:01:24.005 rmmod: ERROR: Module i40iw is not currently loaded
00:01:24.005 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:24.005 + true
00:01:24.005 + for D in $DRIVERS
00:01:24.005 + sudo modprobe mlx5_ib
00:01:24.005 + exit 0
00:01:24.014 [Pipeline] }
00:01:24.030 [Pipeline] // withEnv
00:01:24.037 [Pipeline] }
00:01:24.052 [Pipeline] // stage
00:01:24.061 [Pipeline] catchError
00:01:24.063 [Pipeline] {
00:01:24.077 [Pipeline] timeout
00:01:24.077 Timeout set to expire in 1 hr 0 min
00:01:24.079 [Pipeline] {
00:01:24.093 [Pipeline] stage
00:01:24.095 [Pipeline] { (Tests)
00:01:24.109 [Pipeline] sh
00:01:24.397 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:24.397 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:24.397 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:24.397 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:24.397 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:24.397 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:24.397 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:24.397 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:24.397 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:24.397 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:24.397 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:24.397 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:24.397 + source /etc/os-release
00:01:24.397 ++ NAME='Fedora Linux'
00:01:24.397 ++ VERSION='39 (Cloud Edition)'
00:01:24.397 ++ ID=fedora
00:01:24.397 ++ VERSION_ID=39
00:01:24.397 ++ VERSION_CODENAME=
00:01:24.397 ++ PLATFORM_ID=platform:f39
00:01:24.397 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:24.397 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:24.397 ++ LOGO=fedora-logo-icon
00:01:24.397 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:24.397 ++ HOME_URL=https://fedoraproject.org/
00:01:24.397 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:24.397 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:24.397 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:24.397 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:24.397 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:24.397 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:24.397 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:24.397 ++ SUPPORT_END=2024-11-12
00:01:24.397 ++ VARIANT='Cloud Edition'
00:01:24.397 ++ VARIANT_ID=cloud
00:01:24.397 + uname -a
00:01:24.397 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:24.397 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:27.693 Hugepages
00:01:27.693 node hugesize free / total
00:01:27.693 node0 1048576kB 0 / 0
00:01:27.693 node0 2048kB 0 / 0
00:01:27.693 node1 1048576kB 0 / 0
00:01:27.693 node1 2048kB 0 / 0
00:01:27.693
00:01:27.693 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.693 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:27.693 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:27.694 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:27.694 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:27.694 + rm -f /tmp/spdk-ld-path
00:01:27.694 + source autorun-spdk.conf
00:01:27.694 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.694 ++ SPDK_TEST_NVMF=1
00:01:27.694 ++ SPDK_TEST_NVME_CLI=1
00:01:27.694 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:27.694 ++ SPDK_RUN_UBSAN=1
00:01:27.694 ++ NET_TYPE=phy
00:01:27.694 ++ RUN_NIGHTLY=0
00:01:27.694 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.694 + [[ -n '' ]]
00:01:27.694 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:27.694 + for M in /var/spdk/build-*-manifest.txt
00:01:27.694 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:27.694 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:27.694 + for M in /var/spdk/build-*-manifest.txt
00:01:27.694 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.694 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:27.694 + for M in /var/spdk/build-*-manifest.txt
00:01:27.694 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.694 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:27.694 ++ uname
00:01:27.694 + [[ Linux == \L\i\n\u\x ]]
00:01:27.694 + sudo dmesg -T
00:01:27.694 + sudo dmesg --clear
00:01:27.694 + dmesg_pid=2077735
00:01:27.694 + [[ Fedora Linux == FreeBSD ]]
00:01:27.694 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.694 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.694 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.694 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.694 + export FIO_BIN=/usr/src/fio-static/fio
00:01:27.694 + FIO_BIN=/usr/src/fio-static/fio
00:01:27.694 + sudo dmesg -Tw
00:01:27.694 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.694 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.694 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.694 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.694 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.694 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.694 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.694 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.694 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
17:49:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
17:49:35 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy
17:49:35 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0
17:49:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
17:49:35 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:27.954 17:49:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
17:49:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
17:49:35 -- scripts/common.sh@15 -- $ shopt -s extglob
17:49:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
17:49:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:49:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:49:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:49:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:49:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:49:35 -- paths/export.sh@5 -- $ export PATH
17:49:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:49:35 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
17:49:35 -- common/autobuild_common.sh@493 -- $ date +%s
17:49:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733762975.XXXXXX
17:49:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733762975.WlrK3V
17:49:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
17:49:35 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
17:49:35 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
17:49:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
17:49:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
17:49:35 -- common/autobuild_common.sh@509 -- $ get_config_params
17:49:35 -- common/autotest_common.sh@409 -- $ xtrace_disable
17:49:35 -- common/autotest_common.sh@10 -- $ set +x
17:49:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
17:49:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
17:49:35 -- pm/common@17 -- $ local monitor
17:49:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:49:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:49:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:49:35 -- pm/common@21 -- $ date +%s
17:49:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:49:35 -- pm/common@21 -- $ date +%s
17:49:35 -- pm/common@25 -- $ sleep 1
17:49:35 -- pm/common@21 -- $ date +%s
17:49:35 -- pm/common@21 -- $ date +%s
17:49:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733762975
17:49:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733762975
17:49:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733762975
17:49:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733762975
00:01:27.954 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733762975_collect-cpu-load.pm.log
00:01:27.954 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733762975_collect-vmstat.pm.log
00:01:27.954 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733762975_collect-cpu-temp.pm.log
00:01:27.954 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733762975_collect-bmc-pm.bmc.pm.log
00:01:28.893 17:49:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
17:49:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:49:36 -- spdk/autobuild.sh@12 -- $ umask 022
17:49:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
17:49:36 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.893 Mon Dec 9 04:49:36 PM UTC 2024
17:49:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.893 v25.01-pre-313-g2e1d23f4b
17:49:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
17:49:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:49:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:49:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:49:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:49:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.893 ************************************
00:01:28.893 START TEST ubsan
00:01:28.893 ************************************
17:49:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:28.893 using ubsan
00:01:28.893
00:01:28.893 real 0m0.001s
00:01:28.893 user 0m0.000s
00:01:28.893 sys 0m0.000s
17:49:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
17:49:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.893 ************************************
00:01:28.893 END TEST ubsan
00:01:28.893 ************************************
00:01:29.152 17:49:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
17:49:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
17:49:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
17:49:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
17:49:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
17:49:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
17:49:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
17:49:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
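The next entry runs SPDK's configure with the config_params string assembled earlier plus --with-shared appended by autobuild. Reproduced by hand, the configure-and-build step would look roughly like this (flags copied verbatim from the log; the -j width matches the run_test make line further down):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk

    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared

    make -j112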
17:49:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:29.152 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:29.152 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:29.411 Using 'verbs' RDMA provider
00:01:45.240 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:57.455 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:58.024 Creating mk/config.mk...done.
00:01:58.024 Creating mk/cc.flags.mk...done.
00:01:58.024 Type 'make' to build.
00:01:58.024 17:50:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
17:50:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:50:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:50:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.024 ************************************
00:01:58.024 START TEST make
00:01:58.024 ************************************
17:50:05 make -- common/autotest_common.sh@1129 -- $ make -j112
00:01:58.283 make[1]: Nothing to be done for 'all'.
00:02:06.405 The Meson build system
00:02:06.405 Version: 1.5.0
00:02:06.405 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:02:06.405 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:02:06.405 Build type: native build
00:02:06.405 Program cat found: YES (/usr/bin/cat)
00:02:06.405 Project name: DPDK
00:02:06.405 Project version: 24.03.0
00:02:06.405 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:06.405 C linker for the host machine: cc ld.bfd 2.40-14
00:02:06.405 Host machine cpu family: x86_64
00:02:06.405 Host machine cpu: x86_64
00:02:06.405 Message: ## Building in Developer Mode ##
00:02:06.405 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:06.405 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:06.405 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:06.405 Program python3 found: YES (/usr/bin/python3)
00:02:06.405 Program cat found: YES (/usr/bin/cat)
00:02:06.405 Compiler for C supports arguments -march=native: YES
00:02:06.405 Checking for size of "void *" : 8
00:02:06.405 Checking for size of "void *" : 8 (cached)
00:02:06.405 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:06.405 Library m found: YES
00:02:06.405 Library numa found: YES
00:02:06.405 Has header "numaif.h" : YES
00:02:06.405 Library fdt found: NO
00:02:06.405 Library execinfo found: NO
00:02:06.405 Has header "execinfo.h" : YES
00:02:06.405 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:06.405 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:06.405 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:06.405 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:06.405 Run-time dependency openssl found: YES 3.1.1
00:02:06.405 Run-time dependency libpcap found: YES 1.10.4
00:02:06.405 Has header "pcap.h" with dependency libpcap: YES
00:02:06.405 Compiler for C supports arguments -Wcast-qual: YES
00:02:06.405 Compiler for C supports arguments -Wdeprecated: YES
00:02:06.405 Compiler for C supports arguments -Wformat: YES
00:02:06.405 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:06.405 Compiler for C supports arguments -Wformat-security: NO
00:02:06.405 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:06.405 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:06.405 Compiler for C supports arguments -Wnested-externs: YES
00:02:06.405 Compiler for C supports arguments -Wold-style-definition: YES
00:02:06.405 Compiler for C supports arguments -Wpointer-arith: YES
00:02:06.405 Compiler for C supports arguments -Wsign-compare: YES
00:02:06.405 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:06.405 Compiler for C supports arguments -Wundef: YES
00:02:06.405 Compiler for C supports arguments -Wwrite-strings: YES
00:02:06.405 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:06.405 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:06.405 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:06.405 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:06.405 Program objdump found: YES (/usr/bin/objdump)
00:02:06.405 Compiler for C supports arguments -mavx512f: YES
00:02:06.405 Checking if "AVX512 checking" compiles: YES
00:02:06.405 Fetching value of define "__SSE4_2__" : 1
00:02:06.405 Fetching value of define "__AES__" : 1
00:02:06.405 Fetching value of define "__AVX__" : 1
00:02:06.405 Fetching value of define "__AVX2__" : 1
00:02:06.405 Fetching value of define "__AVX512BW__" : 1
00:02:06.405 Fetching value of define "__AVX512CD__" : 1
00:02:06.405 Fetching value of define "__AVX512DQ__" : 1
00:02:06.405 Fetching value of define "__AVX512F__" : 1
00:02:06.405 Fetching value of define "__AVX512VL__" : 1
00:02:06.405 Fetching value of define "__PCLMUL__" : 1
00:02:06.405 Fetching value of define "__RDRND__" : 1
00:02:06.405 Fetching value of define "__RDSEED__" : 1
00:02:06.405 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:06.405 Fetching value of define "__znver1__" : (undefined)
00:02:06.405 Fetching value of define "__znver2__" : (undefined)
00:02:06.405 Fetching value of define "__znver3__" : (undefined)
00:02:06.405 Fetching value of define "__znver4__" : (undefined)
00:02:06.405 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:06.405 Message: lib/log: Defining dependency "log"
00:02:06.405 Message: lib/kvargs: Defining dependency "kvargs"
00:02:06.405 Message: lib/telemetry: Defining dependency "telemetry"
00:02:06.405 Checking for function "getentropy" : NO
00:02:06.405 Message: lib/eal: Defining dependency "eal"
00:02:06.405 Message: lib/ring: Defining dependency "ring"
00:02:06.405 Message: lib/rcu: Defining dependency "rcu"
00:02:06.405 Message: lib/mempool: Defining dependency "mempool"
00:02:06.405 Message: lib/mbuf: Defining dependency "mbuf"
00:02:06.405 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:06.406 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:06.406 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:06.406 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:06.406 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:06.406 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:06.406 Compiler for C supports arguments -mpclmul: YES
00:02:06.406 Compiler for C supports arguments -maes: YES
00:02:06.406 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:06.406 Compiler for C supports arguments -mavx512bw: YES
00:02:06.406 Compiler for C supports arguments -mavx512dq: YES
00:02:06.406 Compiler for C supports arguments -mavx512vl: YES
00:02:06.406 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:06.406 Compiler for C supports arguments -mavx2: YES
00:02:06.406 Compiler for C supports arguments -mavx: YES
00:02:06.406 Message: lib/net: Defining dependency "net"
00:02:06.406 Message: lib/meter: Defining dependency "meter"
00:02:06.406 Message: lib/ethdev: Defining dependency "ethdev"
00:02:06.406 Message: lib/pci: Defining dependency "pci"
00:02:06.406 Message: lib/cmdline: Defining dependency "cmdline"
00:02:06.406 Message: lib/hash: Defining dependency "hash"
00:02:06.406 Message: lib/timer: Defining dependency "timer"
00:02:06.406 Message: lib/compressdev: Defining dependency "compressdev"
00:02:06.406 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:06.406 Message: lib/dmadev: Defining dependency "dmadev"
00:02:06.406 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:06.406 Message: lib/power: Defining dependency "power"
00:02:06.406 Message: lib/reorder: Defining dependency "reorder"
00:02:06.406 Message: lib/security: Defining dependency "security"
00:02:06.406 Has header "linux/userfaultfd.h" : YES
00:02:06.406 Has header "linux/vduse.h" : YES
00:02:06.406 Message: lib/vhost: Defining dependency "vhost"
00:02:06.406 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:06.406 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:06.406 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:06.406 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:06.406 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:06.406 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:06.406 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:06.406 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:06.406 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:06.406 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:06.406 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:06.406 Configuring doxy-api-html.conf using configuration
00:02:06.406 Configuring doxy-api-man.conf using configuration
00:02:06.406 Program mandb found: YES (/usr/bin/mandb)
00:02:06.406 Program sphinx-build found: NO
00:02:06.406 Configuring rte_build_config.h using configuration
00:02:06.406 Message:
00:02:06.406 =================
00:02:06.406 Applications Enabled
00:02:06.406 =================
00:02:06.406
00:02:06.406 apps:
00:02:06.406
00:02:06.406
00:02:06.406 Message:
00:02:06.406 =================
00:02:06.406 Libraries Enabled
00:02:06.406 =================
00:02:06.406
00:02:06.406 libs:
00:02:06.406 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:06.406 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:06.406 cryptodev, dmadev, power, reorder, security, vhost,
00:02:06.406
00:02:06.406 Message:
00:02:06.406 ===============
00:02:06.406 Drivers Enabled
00:02:06.406 ===============
00:02:06.406
00:02:06.406 common:
00:02:06.406
00:02:06.406 bus:
00:02:06.406 pci, vdev,
00:02:06.406 mempool:
00:02:06.406 ring,
00:02:06.406 dma:
00:02:06.406
00:02:06.406 net:
00:02:06.406
00:02:06.406 crypto:
00:02:06.406
00:02:06.406 compress:
00:02:06.406
00:02:06.406 vdpa:
00:02:06.406
00:02:06.406
00:02:06.406 Message:
00:02:06.406 =================
00:02:06.406 Content Skipped
00:02:06.406 =================
00:02:06.406
00:02:06.406 apps:
00:02:06.406 dumpcap: explicitly disabled via build config
00:02:06.406 graph: explicitly disabled via build config
00:02:06.406 pdump: explicitly disabled via build config
00:02:06.406 proc-info: explicitly disabled via build config
00:02:06.406 test-acl: explicitly disabled via build config
00:02:06.406 test-bbdev: explicitly disabled via build config
00:02:06.406 test-cmdline: explicitly disabled via build config
00:02:06.406 test-compress-perf: explicitly disabled via build config
00:02:06.406 test-crypto-perf: explicitly disabled via build config
00:02:06.406 test-dma-perf: explicitly disabled via build config
00:02:06.406 test-eventdev: explicitly disabled via build config
00:02:06.406 test-fib: explicitly disabled via build config
00:02:06.406 test-flow-perf: explicitly disabled via build config
00:02:06.406 test-gpudev: explicitly disabled via build config
00:02:06.406 test-mldev: explicitly disabled via build config
00:02:06.406 test-pipeline: explicitly disabled via build config
00:02:06.406 test-pmd: explicitly disabled via build config
00:02:06.406 test-regex: explicitly disabled via build config
00:02:06.406 test-sad: explicitly disabled via build config
00:02:06.406 test-security-perf: explicitly disabled via build config
00:02:06.406
00:02:06.406 libs:
00:02:06.406 argparse: explicitly disabled via build config
00:02:06.406 metrics: explicitly disabled via build config
00:02:06.406 acl: explicitly disabled via build config
00:02:06.406 bbdev: explicitly disabled via build config
00:02:06.406 bitratestats: explicitly disabled via build config
00:02:06.406 bpf: explicitly disabled via build config
00:02:06.406 cfgfile: explicitly disabled via build config
00:02:06.406 distributor: explicitly disabled via build config
00:02:06.406 efd: explicitly disabled via build config
00:02:06.406 eventdev: explicitly disabled via build config
00:02:06.406 dispatcher: explicitly disabled via build config
00:02:06.406 gpudev: explicitly disabled via build config
00:02:06.406 gro: explicitly disabled via build config
00:02:06.406 gso: explicitly disabled via build config
00:02:06.406 ip_frag: explicitly disabled via build config
00:02:06.406 jobstats: explicitly disabled via build config
00:02:06.406 latencystats: explicitly disabled via build config
00:02:06.406 lpm: explicitly disabled via build config
00:02:06.407 member: explicitly disabled via build config
00:02:06.407 pcapng: explicitly disabled via build config
00:02:06.407 rawdev: explicitly disabled via build config
00:02:06.407 regexdev: explicitly disabled via build config
00:02:06.407 mldev: explicitly disabled via build config
00:02:06.407 rib: explicitly disabled via build config
00:02:06.407 sched: explicitly disabled via build config
00:02:06.407 stack: explicitly disabled via build config
00:02:06.407 ipsec: explicitly disabled via build config
00:02:06.407 pdcp: explicitly disabled via build config
00:02:06.407 fib: explicitly disabled via build config
00:02:06.407 port: explicitly disabled via build config
00:02:06.407 pdump: explicitly disabled via build config
00:02:06.407 table: explicitly disabled via build config
00:02:06.407 pipeline: explicitly disabled via build config
00:02:06.407 graph: explicitly disabled via build config
00:02:06.407 node: explicitly disabled via build config
00:02:06.407
00:02:06.407 drivers:
00:02:06.407 common/cpt: not in enabled drivers build config
00:02:06.407 common/dpaax: not in enabled drivers build config
00:02:06.407 common/iavf: not in enabled drivers build config
00:02:06.407 common/idpf: not in enabled drivers build config
00:02:06.407 common/ionic: not in enabled drivers build config
00:02:06.407 common/mvep: not in enabled drivers build config
00:02:06.407 common/octeontx: not in enabled drivers build config
00:02:06.407 bus/auxiliary: not in enabled drivers build config
00:02:06.407 bus/cdx: not in enabled drivers build config
00:02:06.407 bus/dpaa: not in enabled drivers build config
00:02:06.407 bus/fslmc: not in enabled drivers build config
00:02:06.407 bus/ifpga: not in enabled drivers build config
00:02:06.407 bus/platform: not in enabled drivers build config
00:02:06.407 bus/uacce: not in enabled drivers build config
00:02:06.407 bus/vmbus: not in enabled drivers build config
00:02:06.407 common/cnxk: not in enabled drivers build config
00:02:06.407 common/mlx5: not in enabled drivers build config
00:02:06.407 common/nfp: not in enabled drivers build config
00:02:06.407 common/nitrox: not in enabled drivers build config
00:02:06.407 common/qat: not in enabled drivers build config
00:02:06.407 common/sfc_efx: not in enabled drivers build config
00:02:06.407 mempool/bucket: not in enabled drivers build config
00:02:06.407 mempool/cnxk: not in enabled drivers build config
00:02:06.407 mempool/dpaa: not in enabled drivers build config
00:02:06.407 mempool/dpaa2: not in enabled drivers build config
00:02:06.407 mempool/octeontx: not in enabled drivers build config
00:02:06.407 mempool/stack: not in enabled drivers build config
00:02:06.407 dma/cnxk: not in enabled drivers build config
00:02:06.407 dma/dpaa: not in enabled drivers build config
00:02:06.407 dma/dpaa2: not in enabled drivers build config
00:02:06.407 dma/hisilicon: not in enabled drivers build config
00:02:06.407 dma/idxd: not in enabled drivers build config
00:02:06.407 dma/ioat: not in enabled drivers build config
00:02:06.407 dma/skeleton: not in enabled drivers build config
00:02:06.407 net/af_packet: not in enabled drivers build config
00:02:06.407 net/af_xdp: not in enabled drivers build config
00:02:06.407 net/ark: not in enabled drivers build config
00:02:06.407 net/atlantic: not in enabled drivers build config
00:02:06.407 net/avp: not in enabled drivers build config
00:02:06.407 net/axgbe: not in enabled drivers build config
00:02:06.407 net/bnx2x: not in enabled drivers build config
00:02:06.407 net/bnxt: not in enabled drivers build config
00:02:06.407 net/bonding: not in enabled drivers build config
00:02:06.407 net/cnxk: not in enabled drivers build config
00:02:06.407 net/cpfl: not in enabled drivers build config
00:02:06.407 net/cxgbe: not in enabled drivers build config
00:02:06.407 net/dpaa: not in enabled drivers build config
00:02:06.407 net/dpaa2: not in enabled drivers build config
00:02:06.407 net/e1000: not in enabled drivers build config
00:02:06.407 net/ena: not in enabled drivers build config
00:02:06.407 net/enetc: not in enabled drivers build config
00:02:06.407 net/enetfec: not in enabled drivers build config
00:02:06.407 net/enic: not in enabled drivers build config
00:02:06.407 net/failsafe: not in enabled drivers build config
00:02:06.407 net/fm10k: not in enabled drivers build config
00:02:06.407 net/gve: not in enabled drivers build config
00:02:06.407 net/hinic: not in enabled drivers build config
00:02:06.407 net/hns3: not in enabled drivers build config
00:02:06.407 net/i40e: not in enabled drivers build config
00:02:06.407 net/iavf: not in enabled drivers build config
00:02:06.407 net/ice: not in enabled drivers build config
00:02:06.407 net/idpf: not in enabled drivers build config
00:02:06.407 net/igc: not in enabled drivers build config
00:02:06.407 net/ionic: not in enabled drivers build config
00:02:06.407 net/ipn3ke: not in enabled drivers build config
00:02:06.407 net/ixgbe: not in enabled drivers build config
00:02:06.407 net/mana: not in enabled drivers build config
00:02:06.407 net/memif: not in enabled drivers build config
00:02:06.407 net/mlx4: not in enabled drivers build config
00:02:06.407 net/mlx5: not in enabled drivers build config
00:02:06.407 net/mvneta: not in enabled drivers build config
00:02:06.407 net/mvpp2: not in enabled drivers build config
00:02:06.407 net/netvsc: not in enabled drivers build config
00:02:06.407 net/nfb: not in enabled drivers build config
00:02:06.407 net/nfp: not in enabled drivers build config
00:02:06.407 net/ngbe: not in enabled drivers build config
00:02:06.407 net/null: not in enabled drivers build config
00:02:06.407 net/octeontx: not in enabled drivers build config
00:02:06.407 net/octeon_ep: not in enabled drivers build config
00:02:06.407 net/pcap: not in enabled drivers build config
00:02:06.407 net/pfe: not in enabled drivers build config
00:02:06.407 net/qede: not in enabled drivers build config
00:02:06.407 net/ring: not in enabled drivers build config
00:02:06.407 net/sfc: not in enabled drivers build config
00:02:06.407 net/softnic: not in enabled drivers build config
00:02:06.407 net/tap: not in enabled drivers build config
00:02:06.407 net/thunderx: not in enabled drivers build config
00:02:06.407 net/txgbe: not in enabled drivers build config
00:02:06.407 net/vdev_netvsc: not in enabled drivers build config
00:02:06.407 net/vhost: not in enabled drivers build config
00:02:06.407 net/virtio: not in enabled drivers build config
00:02:06.407 net/vmxnet3: not in enabled drivers build config
00:02:06.407 raw/*: missing internal dependency, "rawdev"
00:02:06.407 crypto/armv8: not in enabled drivers build config
00:02:06.407 crypto/bcmfs: not in enabled drivers build config
00:02:06.407 crypto/caam_jr: not in enabled drivers build config
00:02:06.407 crypto/ccp: not in enabled drivers build config
00:02:06.407 crypto/cnxk: not in enabled drivers build config
00:02:06.407 crypto/dpaa_sec: not in enabled drivers build config
00:02:06.407 crypto/dpaa2_sec: not in enabled drivers build config
00:02:06.407 crypto/ipsec_mb: not in enabled drivers build config
00:02:06.407 crypto/mlx5: not in enabled drivers build config
00:02:06.407 crypto/mvsam: not in enabled drivers build config
00:02:06.407 crypto/nitrox: not in enabled drivers build config
00:02:06.407 crypto/null: not in enabled drivers build config
00:02:06.407 crypto/octeontx: not in enabled drivers build config
00:02:06.407 crypto/openssl: not in enabled drivers build config
00:02:06.407 crypto/scheduler: not in enabled drivers build config
00:02:06.407 crypto/uadk: not in enabled drivers build config
00:02:06.407 crypto/virtio: not in enabled drivers build config
00:02:06.407 compress/isal: not in enabled drivers build config
00:02:06.407 compress/mlx5: not in enabled drivers build config
00:02:06.407 compress/nitrox: not in enabled drivers build config
00:02:06.407 compress/octeontx: not in enabled drivers build config
00:02:06.407 compress/zlib: not in enabled drivers build config
00:02:06.407 regex/*: missing internal dependency, "regexdev"
00:02:06.407 ml/*: missing internal dependency, "mldev"
00:02:06.407 vdpa/ifc: not in enabled drivers build config
00:02:06.407 vdpa/mlx5: not in enabled drivers build config
00:02:06.407 vdpa/nfp: not in enabled drivers build config
00:02:06.407 vdpa/sfc: not in enabled drivers build config
00:02:06.407 event/*: missing internal dependency, "eventdev"
00:02:06.407 baseband/*: missing internal dependency, "bbdev"
00:02:06.407 gpu/*: missing internal dependency, "gpudev"
00:02:06.407
00:02:06.407
00:02:06.667 Build targets in project: 85
00:02:06.667
00:02:06.667 DPDK 24.03.0
00:02:06.667
00:02:06.667 User defined options
00:02:06.667 buildtype : debug
00:02:06.667 default_library : shared
00:02:06.667 libdir : lib
00:02:06.667 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:02:06.667 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:06.667 c_link_args :
00:02:06.667 cpu_instruction_set: native
00:02:06.667 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:06.667 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:06.667 enable_docs : false
00:02:06.667 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:06.667 enable_kmods : false
00:02:06.667 max_lcores : 128
00:02:06.667 tests : false
00:02:06.667
00:02:06.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
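Meson has finished configuring the bundled DPDK at this point, and the User defined options block above summarizes that configuration. The exact meson command line is not captured in the log; an approximate reconstruction using standard meson and DPDK option names would be:

    # Approximation only, rebuilt from the summary above; the long
    # disable_apps/disable_libs/enable_drivers lists are elided here.
    meson setup build-tmp \
        --buildtype debug \
        --default-library shared \
        --libdir lib \
        --prefix /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Dtests=false
    ninja -C build-tmp

The ninja compile and link steps that follow carry out that configuration.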
00:02:07.248 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:02:07.248 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:07.248 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:07.248 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:07.248 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:07.507 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:07.507 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:07.507 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:07.507 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:07.507 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:07.507 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:07.507 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:07.507 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:07.507 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:07.507 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:07.507 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:07.507 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:07.507 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:07.507 [18/268] Linking static target lib/librte_kvargs.a
00:02:07.507 [19/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:07.507 [20/268] Linking static target lib/librte_log.a
00:02:07.507 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:07.507 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:07.507 [23/268] Linking static target lib/librte_pci.a
00:02:07.507 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:07.507 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:07.507 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:07.507 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:07.507 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:07.507 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:07.507 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:07.507 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:07.765 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:07.765 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:07.765 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:07.765 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:07.765 [36/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:07.766 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:07.766 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:07.766 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:07.766 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:07.766 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:07.766 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:07.766 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:07.766 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:07.766 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:07.766 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:07.766 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:07.766 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:07.766 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:07.766 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:08.025 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:08.025 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:08.025 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:08.025 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:08.025 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:08.025 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:08.025 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:08.025 [58/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:08.025 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:08.025 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:08.025 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:08.025 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:08.025 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:08.025 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:08.025 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:08.025 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:08.025 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:08.025 [68/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:08.025 [69/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:08.025 [70/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:08.025 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:08.025 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:08.025 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:08.025 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:08.025 [75/268] Linking static target lib/librte_meter.a
00:02:08.025 [76/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:08.025 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:08.025 [78/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:08.025 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:08.025 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:08.025 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:08.025 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:08.025 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:08.025 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:08.025 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:08.025 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:08.025 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:08.025 [88/268] Linking static target lib/librte_ring.a
00:02:08.025 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:08.025 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:08.025 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:08.025 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:08.025 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:08.025 [94/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:08.025 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:08.025 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:08.025 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:08.025 [98/268] Linking static target lib/librte_cmdline.a
00:02:08.025 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:08.025 [100/268] Linking static target lib/librte_telemetry.a
00:02:08.025 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:08.025 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:08.025 [103/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:08.025 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:08.025 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:08.025 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:08.025 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:08.025 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:08.025 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:08.025 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:08.025 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:08.025 [112/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.025 [113/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:08.025 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:08.025 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:08.025 [116/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.025 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:08.025 [118/268] Linking static target lib/librte_mempool.a
00:02:08.025 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:08.025 [120/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:08.025 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:08.025 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:08.025 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:08.025 [124/268] Linking static target lib/librte_rcu.a
00:02:08.025 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:08.025 [126/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:08.025 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:08.025 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:08.025 [129/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:08.025 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:08.025 [131/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:08.025 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:08.025 [133/268] Linking static target lib/librte_net.a
00:02:08.025 [134/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:08.025 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:08.025 [136/268] Linking static target lib/librte_timer.a
00:02:08.025 [137/268] Linking static target lib/librte_eal.a
00:02:08.025 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:08.025 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:08.025 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:08.025 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.025 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.025 [143/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.025 [144/268] Linking static target lib/librte_dmadev.a 00:02:08.025 [145/268] Linking static target lib/librte_compressdev.a 00:02:08.025 [146/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.025 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.025 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.025 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:08.025 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.285 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:08.285 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.285 [153/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.285 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:08.285 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:08.285 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.285 [157/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.285 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.285 [159/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.285 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:08.285 [161/268] Linking static target lib/librte_mbuf.a 00:02:08.285 [162/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.285 [163/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.285 [164/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.285 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:08.285 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:08.285 [167/268] Linking static target lib/librte_reorder.a 00:02:08.285 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.285 [169/268] Linking target lib/librte_log.so.24.1 00:02:08.285 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.285 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.285 [172/268] Linking static target lib/librte_hash.a 00:02:08.285 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.285 [174/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:08.285 [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.285 [176/268] Linking static target lib/librte_security.a 00:02:08.285 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:08.285 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:08.544 [179/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.544 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.544 [181/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:02:08.544 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:08.544 [183/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.544 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.544 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:08.544 [186/268] Linking static target lib/librte_power.a 00:02:08.544 [187/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.544 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:08.544 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.544 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.544 [191/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.544 [192/268] Linking static target lib/librte_cryptodev.a 00:02:08.544 [193/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.544 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:08.544 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.544 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.544 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.544 [198/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.544 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:08.544 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.544 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.544 [202/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.544 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.544 [204/268] Linking static target drivers/librte_bus_vdev.a 00:02:08.804 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.804 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.804 [207/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.804 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:08.804 [209/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:08.804 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.804 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.804 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.804 [213/268] Linking static target drivers/librte_mempool_ring.a 00:02:08.804 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.804 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.804 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.804 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.063 [218/268] Linking static target lib/librte_ethdev.a 00:02:09.063 [219/268] Generating lib/mempool.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:09.063 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.063 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.063 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.063 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.322 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.581 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.581 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.581 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.150 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.150 [229/268] Linking static target lib/librte_vhost.a 00:02:10.719 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.625 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.282 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.184 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.184 [234/268] Linking target lib/librte_eal.so.24.1 00:02:21.184 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.184 [236/268] Linking target lib/librte_pci.so.24.1 00:02:21.184 [237/268] Linking target lib/librte_ring.so.24.1 00:02:21.184 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.184 [239/268] Linking target lib/librte_meter.so.24.1 00:02:21.184 [240/268] Linking target lib/librte_timer.so.24.1 00:02:21.184 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.443 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.443 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.443 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.443 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.443 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.443 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.443 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:21.443 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:21.701 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:21.701 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.701 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:21.701 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:21.701 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:21.959 [255/268] Linking target lib/librte_net.so.24.1 00:02:21.959 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:21.959 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:21.959 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:21.959 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 
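At this point ninja has moved on to the shared-library link phase: "Linking target ...so.24.1" produces the versioned shared objects, and "Generating symbol file" records each library's exported symbols so dependents can skip relinking when the exported interface is unchanged. To poke at one of the resulting libraries, something like the following works (the path is an assumption):

    # Illustrative inspection of a freshly linked shared object.
    so=build-tmp/lib/librte_eal.so.24.1
    readelf -d "$so" | grep SONAME        # embedded soname the linker recorded
    nm -D --defined-only "$so" | head     # sample of the exported symbols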
00:02:21.959 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.959 [261/268] Linking target lib/librte_security.so.24.1 00:02:21.959 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:21.959 [263/268] Linking target lib/librte_hash.so.24.1 00:02:21.959 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:22.217 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:22.217 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:22.217 [267/268] Linking target lib/librte_power.so.24.1 00:02:22.217 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:22.217 INFO: autodetecting backend as ninja 00:02:22.217 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:28.783 CC lib/ut_mock/mock.o 00:02:28.783 CC lib/log/log.o 00:02:28.783 CC lib/log/log_flags.o 00:02:28.783 CC lib/log/log_deprecated.o 00:02:28.783 CC lib/ut/ut.o 00:02:28.783 LIB libspdk_log.a 00:02:28.783 LIB libspdk_ut_mock.a 00:02:28.783 LIB libspdk_ut.a 00:02:28.783 SO libspdk_ut_mock.so.6.0 00:02:28.783 SO libspdk_ut.so.2.0 00:02:28.784 SO libspdk_log.so.7.1 00:02:28.784 SYMLINK libspdk_ut.so 00:02:28.784 SYMLINK libspdk_ut_mock.so 00:02:28.784 SYMLINK libspdk_log.so 00:02:28.784 CC lib/util/base64.o 00:02:28.784 CXX lib/trace_parser/trace.o 00:02:28.784 CC lib/dma/dma.o 00:02:28.784 CC lib/util/bit_array.o 00:02:28.784 CC lib/util/cpuset.o 00:02:28.784 CC lib/util/crc32.o 00:02:28.784 CC lib/util/crc16.o 00:02:28.784 CC lib/util/crc32c.o 00:02:28.784 CC lib/ioat/ioat.o 00:02:28.784 CC lib/util/crc32_ieee.o 00:02:28.784 CC lib/util/crc64.o 00:02:28.784 CC lib/util/dif.o 00:02:28.784 CC lib/util/fd.o 00:02:28.784 CC lib/util/fd_group.o 00:02:28.784 CC lib/util/file.o 00:02:28.784 CC lib/util/hexlify.o 00:02:28.784 CC lib/util/iov.o 00:02:28.784 CC lib/util/math.o 00:02:28.784 CC lib/util/net.o 00:02:28.784 CC lib/util/pipe.o 00:02:28.784 CC lib/util/strerror_tls.o 00:02:28.784 CC lib/util/string.o 00:02:28.784 CC lib/util/uuid.o 00:02:28.784 CC lib/util/xor.o 00:02:28.784 CC lib/util/zipf.o 00:02:28.784 CC lib/util/md5.o 00:02:28.784 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.784 CC lib/vfio_user/host/vfio_user.o 00:02:29.042 LIB libspdk_dma.a 00:02:29.043 SO libspdk_dma.so.5.0 00:02:29.043 LIB libspdk_ioat.a 00:02:29.043 SYMLINK libspdk_dma.so 00:02:29.043 SO libspdk_ioat.so.7.0 00:02:29.043 SYMLINK libspdk_ioat.so 00:02:29.043 LIB libspdk_vfio_user.a 00:02:29.043 SO libspdk_vfio_user.so.5.0 00:02:29.301 LIB libspdk_util.a 00:02:29.301 SYMLINK libspdk_vfio_user.so 00:02:29.301 SO libspdk_util.so.10.1 00:02:29.301 SYMLINK libspdk_util.so 00:02:29.301 LIB libspdk_trace_parser.a 00:02:29.561 SO libspdk_trace_parser.so.6.0 00:02:29.561 SYMLINK libspdk_trace_parser.so 00:02:29.820 CC lib/conf/conf.o 00:02:29.820 CC lib/json/json_util.o 00:02:29.820 CC lib/json/json_parse.o 00:02:29.820 CC lib/json/json_write.o 00:02:29.820 CC lib/vmd/vmd.o 00:02:29.820 CC lib/rdma_utils/rdma_utils.o 00:02:29.820 CC lib/idxd/idxd.o 00:02:29.820 CC lib/vmd/led.o 00:02:29.820 CC lib/idxd/idxd_user.o 00:02:29.820 CC lib/env_dpdk/env.o 00:02:29.820 CC lib/idxd/idxd_kernel.o 00:02:29.820 CC lib/env_dpdk/memory.o 00:02:29.820 CC lib/env_dpdk/pci.o 00:02:29.820 CC lib/env_dpdk/init.o 00:02:29.820 CC lib/env_dpdk/threads.o 00:02:29.820 CC lib/env_dpdk/pci_ioat.o 00:02:29.820 CC lib/env_dpdk/pci_virtio.o 00:02:29.820 CC 
lib/env_dpdk/pci_vmd.o 00:02:29.820 CC lib/env_dpdk/pci_idxd.o 00:02:29.820 CC lib/env_dpdk/pci_event.o 00:02:29.820 CC lib/env_dpdk/sigbus_handler.o 00:02:29.820 CC lib/env_dpdk/pci_dpdk.o 00:02:29.820 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.820 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:30.078 LIB libspdk_conf.a 00:02:30.078 SO libspdk_conf.so.6.0 00:02:30.078 LIB libspdk_json.a 00:02:30.078 LIB libspdk_rdma_utils.a 00:02:30.078 SO libspdk_json.so.6.0 00:02:30.078 SYMLINK libspdk_conf.so 00:02:30.078 SO libspdk_rdma_utils.so.1.0 00:02:30.078 SYMLINK libspdk_json.so 00:02:30.078 SYMLINK libspdk_rdma_utils.so 00:02:30.337 LIB libspdk_idxd.a 00:02:30.337 SO libspdk_idxd.so.12.1 00:02:30.337 LIB libspdk_vmd.a 00:02:30.337 SO libspdk_vmd.so.6.0 00:02:30.337 SYMLINK libspdk_idxd.so 00:02:30.337 SYMLINK libspdk_vmd.so 00:02:30.596 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.596 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.596 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.596 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.596 CC lib/rdma_provider/common.o 00:02:30.596 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.596 LIB libspdk_jsonrpc.a 00:02:30.596 LIB libspdk_rdma_provider.a 00:02:30.856 SO libspdk_rdma_provider.so.7.0 00:02:30.856 SO libspdk_jsonrpc.so.6.0 00:02:30.856 LIB libspdk_env_dpdk.a 00:02:30.856 SYMLINK libspdk_jsonrpc.so 00:02:30.856 SYMLINK libspdk_rdma_provider.so 00:02:30.856 SO libspdk_env_dpdk.so.15.1 00:02:30.856 SYMLINK libspdk_env_dpdk.so 00:02:31.115 CC lib/rpc/rpc.o 00:02:31.374 LIB libspdk_rpc.a 00:02:31.374 SO libspdk_rpc.so.6.0 00:02:31.374 SYMLINK libspdk_rpc.so 00:02:31.942 CC lib/keyring/keyring.o 00:02:31.942 CC lib/keyring/keyring_rpc.o 00:02:31.942 CC lib/trace/trace.o 00:02:31.942 CC lib/trace/trace_flags.o 00:02:31.942 CC lib/trace/trace_rpc.o 00:02:31.942 CC lib/notify/notify.o 00:02:31.942 CC lib/notify/notify_rpc.o 00:02:31.942 LIB libspdk_notify.a 00:02:31.942 LIB libspdk_keyring.a 00:02:31.942 SO libspdk_notify.so.6.0 00:02:31.942 LIB libspdk_trace.a 00:02:32.201 SO libspdk_keyring.so.2.0 00:02:32.201 SO libspdk_trace.so.11.0 00:02:32.201 SYMLINK libspdk_notify.so 00:02:32.201 SYMLINK libspdk_keyring.so 00:02:32.201 SYMLINK libspdk_trace.so 00:02:32.460 CC lib/thread/thread.o 00:02:32.460 CC lib/thread/iobuf.o 00:02:32.460 CC lib/sock/sock.o 00:02:32.460 CC lib/sock/sock_rpc.o 00:02:33.026 LIB libspdk_sock.a 00:02:33.027 SO libspdk_sock.so.10.0 00:02:33.027 SYMLINK libspdk_sock.so 00:02:33.285 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.285 CC lib/nvme/nvme_ctrlr.o 00:02:33.285 CC lib/nvme/nvme_fabric.o 00:02:33.285 CC lib/nvme/nvme_ns_cmd.o 00:02:33.285 CC lib/nvme/nvme_ns.o 00:02:33.285 CC lib/nvme/nvme_pcie_common.o 00:02:33.285 CC lib/nvme/nvme_pcie.o 00:02:33.285 CC lib/nvme/nvme_qpair.o 00:02:33.285 CC lib/nvme/nvme.o 00:02:33.285 CC lib/nvme/nvme_transport.o 00:02:33.285 CC lib/nvme/nvme_quirks.o 00:02:33.285 CC lib/nvme/nvme_discovery.o 00:02:33.285 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.285 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.285 CC lib/nvme/nvme_tcp.o 00:02:33.285 CC lib/nvme/nvme_io_msg.o 00:02:33.285 CC lib/nvme/nvme_opal.o 00:02:33.285 CC lib/nvme/nvme_poll_group.o 00:02:33.285 CC lib/nvme/nvme_zns.o 00:02:33.285 CC lib/nvme/nvme_stubs.o 00:02:33.285 CC lib/nvme/nvme_auth.o 00:02:33.285 CC lib/nvme/nvme_cuse.o 00:02:33.285 CC lib/nvme/nvme_rdma.o 00:02:33.543 LIB libspdk_thread.a 00:02:33.543 SO libspdk_thread.so.11.0 00:02:33.801 SYMLINK libspdk_thread.so 00:02:34.060 CC lib/accel/accel.o 00:02:34.060 CC lib/accel/accel_rpc.o 00:02:34.060 CC 
lib/init/json_config.o 00:02:34.060 CC lib/accel/accel_sw.o 00:02:34.060 CC lib/init/subsystem.o 00:02:34.060 CC lib/init/subsystem_rpc.o 00:02:34.060 CC lib/blob/blobstore.o 00:02:34.060 CC lib/blob/request.o 00:02:34.060 CC lib/init/rpc.o 00:02:34.060 CC lib/blob/zeroes.o 00:02:34.060 CC lib/blob/blob_bs_dev.o 00:02:34.060 CC lib/fsdev/fsdev_io.o 00:02:34.060 CC lib/fsdev/fsdev.o 00:02:34.060 CC lib/fsdev/fsdev_rpc.o 00:02:34.060 CC lib/virtio/virtio.o 00:02:34.060 CC lib/virtio/virtio_vhost_user.o 00:02:34.060 CC lib/virtio/virtio_vfio_user.o 00:02:34.060 CC lib/virtio/virtio_pci.o 00:02:34.319 LIB libspdk_init.a 00:02:34.319 SO libspdk_init.so.6.0 00:02:34.319 LIB libspdk_virtio.a 00:02:34.319 SYMLINK libspdk_init.so 00:02:34.319 SO libspdk_virtio.so.7.0 00:02:34.577 SYMLINK libspdk_virtio.so 00:02:34.577 LIB libspdk_fsdev.a 00:02:34.577 SO libspdk_fsdev.so.2.0 00:02:34.577 SYMLINK libspdk_fsdev.so 00:02:34.836 CC lib/event/app.o 00:02:34.836 CC lib/event/reactor.o 00:02:34.836 CC lib/event/log_rpc.o 00:02:34.836 CC lib/event/app_rpc.o 00:02:34.836 CC lib/event/scheduler_static.o 00:02:34.836 LIB libspdk_accel.a 00:02:34.836 SO libspdk_accel.so.16.0 00:02:34.836 LIB libspdk_nvme.a 00:02:35.095 SYMLINK libspdk_accel.so 00:02:35.095 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:35.095 SO libspdk_nvme.so.15.0 00:02:35.095 LIB libspdk_event.a 00:02:35.095 SO libspdk_event.so.14.0 00:02:35.095 SYMLINK libspdk_event.so 00:02:35.354 SYMLINK libspdk_nvme.so 00:02:35.354 CC lib/bdev/bdev.o 00:02:35.354 CC lib/bdev/bdev_rpc.o 00:02:35.354 CC lib/bdev/bdev_zone.o 00:02:35.354 CC lib/bdev/part.o 00:02:35.354 CC lib/bdev/scsi_nvme.o 00:02:35.613 LIB libspdk_fuse_dispatcher.a 00:02:35.613 SO libspdk_fuse_dispatcher.so.1.0 00:02:35.613 SYMLINK libspdk_fuse_dispatcher.so 00:02:36.181 LIB libspdk_blob.a 00:02:36.181 SO libspdk_blob.so.12.0 00:02:36.439 SYMLINK libspdk_blob.so 00:02:36.698 CC lib/lvol/lvol.o 00:02:36.698 CC lib/blobfs/blobfs.o 00:02:36.698 CC lib/blobfs/tree.o 00:02:37.264 LIB libspdk_bdev.a 00:02:37.264 SO libspdk_bdev.so.17.0 00:02:37.265 LIB libspdk_blobfs.a 00:02:37.265 SO libspdk_blobfs.so.11.0 00:02:37.265 LIB libspdk_lvol.a 00:02:37.265 SYMLINK libspdk_bdev.so 00:02:37.523 SYMLINK libspdk_blobfs.so 00:02:37.523 SO libspdk_lvol.so.11.0 00:02:37.523 SYMLINK libspdk_lvol.so 00:02:37.785 CC lib/scsi/dev.o 00:02:37.785 CC lib/scsi/lun.o 00:02:37.785 CC lib/scsi/scsi.o 00:02:37.785 CC lib/scsi/port.o 00:02:37.785 CC lib/ftl/ftl_core.o 00:02:37.785 CC lib/nvmf/ctrlr.o 00:02:37.785 CC lib/ftl/ftl_init.o 00:02:37.785 CC lib/scsi/scsi_bdev.o 00:02:37.785 CC lib/nvmf/ctrlr_discovery.o 00:02:37.785 CC lib/ftl/ftl_layout.o 00:02:37.785 CC lib/scsi/scsi_pr.o 00:02:37.785 CC lib/ftl/ftl_debug.o 00:02:37.785 CC lib/ftl/ftl_io.o 00:02:37.785 CC lib/scsi/scsi_rpc.o 00:02:37.785 CC lib/nbd/nbd.o 00:02:37.785 CC lib/nvmf/ctrlr_bdev.o 00:02:37.785 CC lib/nvmf/subsystem.o 00:02:37.785 CC lib/nbd/nbd_rpc.o 00:02:37.785 CC lib/scsi/task.o 00:02:37.785 CC lib/ublk/ublk.o 00:02:37.785 CC lib/ftl/ftl_sb.o 00:02:37.785 CC lib/nvmf/nvmf.o 00:02:37.785 CC lib/ftl/ftl_l2p.o 00:02:37.785 CC lib/ublk/ublk_rpc.o 00:02:37.785 CC lib/nvmf/nvmf_rpc.o 00:02:37.785 CC lib/ftl/ftl_l2p_flat.o 00:02:37.785 CC lib/nvmf/transport.o 00:02:37.785 CC lib/ftl/ftl_nv_cache.o 00:02:37.785 CC lib/nvmf/tcp.o 00:02:37.785 CC lib/ftl/ftl_band.o 00:02:37.785 CC lib/nvmf/stubs.o 00:02:37.785 CC lib/ftl/ftl_band_ops.o 00:02:37.785 CC lib/ftl/ftl_writer.o 00:02:37.785 CC lib/nvmf/mdns_server.o 00:02:37.785 CC lib/nvmf/rdma.o 
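From "CC lib/..." onward the log is SPDK's own make output: each lib/<subsystem> directory (log, util, nvme, ftl, nvmf, ...) is compiled and checked into a libspdk_*.a / libspdk_*.so pair, which is what the LIB/SO/SYMLINK lines report. A hedged sketch of driving an equivalent build by hand; the --with-rdma flag is an assumption inferred from the RDMA-oriented objects being compiled, not read from this job's settings:

    # Hypothetical manual equivalent of the SPDK compile phase.
    cd spdk
    ./configure --with-rdma     # enable the RDMA transport used by lib/nvmf
    make -j "$(nproc)"          # produces the CC/LIB/SO/SYMLINK lines seen here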
00:02:37.785 CC lib/ftl/ftl_reloc.o 00:02:37.785 CC lib/ftl/ftl_rq.o 00:02:37.785 CC lib/ftl/ftl_l2p_cache.o 00:02:37.785 CC lib/nvmf/auth.o 00:02:37.785 CC lib/ftl/ftl_p2l.o 00:02:37.785 CC lib/ftl/ftl_p2l_log.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:37.785 CC lib/ftl/utils/ftl_conf.o 00:02:37.785 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:37.785 CC lib/ftl/utils/ftl_md.o 00:02:37.785 CC lib/ftl/utils/ftl_mempool.o 00:02:37.785 CC lib/ftl/utils/ftl_bitmap.o 00:02:37.785 CC lib/ftl/utils/ftl_property.o 00:02:37.785 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:37.785 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:37.785 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:37.785 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:37.785 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:37.785 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:37.785 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:37.785 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.785 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.785 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.785 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:37.785 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.785 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:37.785 CC lib/ftl/base/ftl_base_dev.o 00:02:37.785 CC lib/ftl/ftl_trace.o 00:02:37.785 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.351 LIB libspdk_nbd.a 00:02:38.351 SO libspdk_nbd.so.7.0 00:02:38.351 SYMLINK libspdk_nbd.so 00:02:38.351 LIB libspdk_scsi.a 00:02:38.351 LIB libspdk_ublk.a 00:02:38.351 SO libspdk_scsi.so.9.0 00:02:38.351 SO libspdk_ublk.so.3.0 00:02:38.609 SYMLINK libspdk_ublk.so 00:02:38.610 SYMLINK libspdk_scsi.so 00:02:38.868 LIB libspdk_ftl.a 00:02:38.868 CC lib/vhost/vhost.o 00:02:38.868 CC lib/vhost/vhost_rpc.o 00:02:38.868 CC lib/vhost/vhost_scsi.o 00:02:38.868 CC lib/vhost/vhost_blk.o 00:02:38.868 CC lib/vhost/rte_vhost_user.o 00:02:38.868 CC lib/iscsi/conn.o 00:02:38.868 CC lib/iscsi/init_grp.o 00:02:38.868 CC lib/iscsi/iscsi.o 00:02:38.868 CC lib/iscsi/param.o 00:02:38.868 CC lib/iscsi/portal_grp.o 00:02:38.868 CC lib/iscsi/tgt_node.o 00:02:38.868 CC lib/iscsi/iscsi_subsystem.o 00:02:38.868 CC lib/iscsi/iscsi_rpc.o 00:02:38.868 CC lib/iscsi/task.o 00:02:38.868 SO libspdk_ftl.so.9.0 00:02:39.127 SYMLINK libspdk_ftl.so 00:02:39.386 LIB libspdk_nvmf.a 00:02:39.386 SO libspdk_nvmf.so.20.0 00:02:39.646 SYMLINK libspdk_nvmf.so 00:02:39.646 LIB libspdk_vhost.a 00:02:39.646 SO libspdk_vhost.so.8.0 00:02:39.905 SYMLINK libspdk_vhost.so 00:02:39.905 LIB libspdk_iscsi.a 00:02:39.905 SO libspdk_iscsi.so.8.0 00:02:40.165 SYMLINK libspdk_iscsi.so 00:02:40.733 CC module/env_dpdk/env_dpdk_rpc.o 00:02:40.733 LIB libspdk_env_dpdk_rpc.a 00:02:40.733 CC module/accel/ioat/accel_ioat.o 00:02:40.733 CC module/accel/iaa/accel_iaa.o 00:02:40.733 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.733 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.733 CC module/blob/bdev/blob_bdev.o 00:02:40.733 CC module/accel/error/accel_error.o 00:02:40.733 CC module/accel/dsa/accel_dsa.o 00:02:40.733 CC module/accel/error/accel_error_rpc.o 00:02:40.733 CC 
module/accel/dsa/accel_dsa_rpc.o 00:02:40.733 SO libspdk_env_dpdk_rpc.so.6.0 00:02:40.733 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:40.733 CC module/sock/posix/posix.o 00:02:40.733 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.733 CC module/fsdev/aio/fsdev_aio.o 00:02:40.733 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:40.733 CC module/fsdev/aio/linux_aio_mgr.o 00:02:40.733 CC module/keyring/file/keyring.o 00:02:40.733 CC module/keyring/file/keyring_rpc.o 00:02:40.733 CC module/keyring/linux/keyring.o 00:02:40.733 CC module/keyring/linux/keyring_rpc.o 00:02:40.733 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.733 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.992 LIB libspdk_scheduler_gscheduler.a 00:02:40.992 LIB libspdk_accel_ioat.a 00:02:40.992 LIB libspdk_keyring_linux.a 00:02:40.992 LIB libspdk_keyring_file.a 00:02:40.992 LIB libspdk_accel_error.a 00:02:40.992 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.992 LIB libspdk_accel_iaa.a 00:02:40.992 LIB libspdk_scheduler_dynamic.a 00:02:40.992 SO libspdk_accel_error.so.2.0 00:02:40.992 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.992 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.992 SO libspdk_keyring_linux.so.1.0 00:02:40.992 SO libspdk_accel_ioat.so.6.0 00:02:40.992 SO libspdk_keyring_file.so.2.0 00:02:40.992 SO libspdk_accel_iaa.so.3.0 00:02:40.992 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.992 LIB libspdk_blob_bdev.a 00:02:40.992 SYMLINK libspdk_accel_error.so 00:02:40.992 LIB libspdk_accel_dsa.a 00:02:40.992 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.992 SO libspdk_blob_bdev.so.12.0 00:02:40.992 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.992 SYMLINK libspdk_accel_iaa.so 00:02:40.992 SYMLINK libspdk_keyring_file.so 00:02:40.992 SYMLINK libspdk_keyring_linux.so 00:02:40.992 SYMLINK libspdk_accel_ioat.so 00:02:40.992 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.992 SO libspdk_accel_dsa.so.5.0 00:02:41.251 SYMLINK libspdk_blob_bdev.so 00:02:41.251 SYMLINK libspdk_accel_dsa.so 00:02:41.251 LIB libspdk_fsdev_aio.a 00:02:41.251 SO libspdk_fsdev_aio.so.1.0 00:02:41.251 LIB libspdk_sock_posix.a 00:02:41.511 SO libspdk_sock_posix.so.6.0 00:02:41.511 SYMLINK libspdk_fsdev_aio.so 00:02:41.511 SYMLINK libspdk_sock_posix.so 00:02:41.769 CC module/bdev/gpt/gpt.o 00:02:41.769 CC module/bdev/gpt/vbdev_gpt.o 00:02:41.769 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:41.769 CC module/blobfs/bdev/blobfs_bdev.o 00:02:41.769 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:41.769 CC module/bdev/lvol/vbdev_lvol.o 00:02:41.769 CC module/bdev/error/vbdev_error.o 00:02:41.769 CC module/bdev/nvme/bdev_nvme.o 00:02:41.769 CC module/bdev/delay/vbdev_delay.o 00:02:41.769 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:41.769 CC module/bdev/error/vbdev_error_rpc.o 00:02:41.769 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:41.769 CC module/bdev/nvme/nvme_rpc.o 00:02:41.769 CC module/bdev/nvme/bdev_mdns_client.o 00:02:41.769 CC module/bdev/passthru/vbdev_passthru.o 00:02:41.769 CC module/bdev/nvme/vbdev_opal.o 00:02:41.769 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:41.769 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:41.769 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:41.769 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:41.769 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:41.769 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:41.769 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:41.769 CC module/bdev/split/vbdev_split.o 00:02:41.769 CC module/bdev/raid/bdev_raid.o 00:02:41.769 CC module/bdev/malloc/bdev_malloc.o 
00:02:41.769 CC module/bdev/split/vbdev_split_rpc.o 00:02:41.769 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:41.769 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:41.769 CC module/bdev/raid/bdev_raid_sb.o 00:02:41.769 CC module/bdev/raid/bdev_raid_rpc.o 00:02:41.769 CC module/bdev/raid/raid0.o 00:02:41.769 CC module/bdev/null/bdev_null.o 00:02:41.769 CC module/bdev/raid/concat.o 00:02:41.769 CC module/bdev/raid/raid1.o 00:02:41.769 CC module/bdev/aio/bdev_aio.o 00:02:41.769 CC module/bdev/ftl/bdev_ftl.o 00:02:41.769 CC module/bdev/aio/bdev_aio_rpc.o 00:02:41.769 CC module/bdev/null/bdev_null_rpc.o 00:02:41.769 CC module/bdev/iscsi/bdev_iscsi.o 00:02:41.769 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:41.769 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.028 LIB libspdk_blobfs_bdev.a 00:02:42.028 SO libspdk_blobfs_bdev.so.6.0 00:02:42.028 LIB libspdk_bdev_split.a 00:02:42.028 LIB libspdk_bdev_gpt.a 00:02:42.028 LIB libspdk_bdev_error.a 00:02:42.028 SYMLINK libspdk_blobfs_bdev.so 00:02:42.028 SO libspdk_bdev_error.so.6.0 00:02:42.028 SO libspdk_bdev_gpt.so.6.0 00:02:42.028 SO libspdk_bdev_split.so.6.0 00:02:42.028 LIB libspdk_bdev_null.a 00:02:42.028 LIB libspdk_bdev_ftl.a 00:02:42.028 LIB libspdk_bdev_passthru.a 00:02:42.028 LIB libspdk_bdev_zone_block.a 00:02:42.028 SO libspdk_bdev_ftl.so.6.0 00:02:42.028 SO libspdk_bdev_null.so.6.0 00:02:42.028 LIB libspdk_bdev_aio.a 00:02:42.028 SO libspdk_bdev_passthru.so.6.0 00:02:42.028 SYMLINK libspdk_bdev_gpt.so 00:02:42.028 SYMLINK libspdk_bdev_split.so 00:02:42.028 SYMLINK libspdk_bdev_error.so 00:02:42.028 SO libspdk_bdev_zone_block.so.6.0 00:02:42.028 LIB libspdk_bdev_malloc.a 00:02:42.028 LIB libspdk_bdev_iscsi.a 00:02:42.028 SO libspdk_bdev_aio.so.6.0 00:02:42.028 LIB libspdk_bdev_delay.a 00:02:42.028 SYMLINK libspdk_bdev_ftl.so 00:02:42.028 SYMLINK libspdk_bdev_null.so 00:02:42.028 SO libspdk_bdev_malloc.so.6.0 00:02:42.028 SO libspdk_bdev_iscsi.so.6.0 00:02:42.028 SYMLINK libspdk_bdev_zone_block.so 00:02:42.028 SO libspdk_bdev_delay.so.6.0 00:02:42.028 SYMLINK libspdk_bdev_passthru.so 00:02:42.028 SYMLINK libspdk_bdev_aio.so 00:02:42.028 LIB libspdk_bdev_lvol.a 00:02:42.287 LIB libspdk_bdev_virtio.a 00:02:42.287 SYMLINK libspdk_bdev_iscsi.so 00:02:42.287 SO libspdk_bdev_lvol.so.6.0 00:02:42.287 SYMLINK libspdk_bdev_malloc.so 00:02:42.287 SYMLINK libspdk_bdev_delay.so 00:02:42.287 SO libspdk_bdev_virtio.so.6.0 00:02:42.287 SYMLINK libspdk_bdev_lvol.so 00:02:42.287 SYMLINK libspdk_bdev_virtio.so 00:02:42.546 LIB libspdk_bdev_raid.a 00:02:42.547 SO libspdk_bdev_raid.so.6.0 00:02:42.547 SYMLINK libspdk_bdev_raid.so 00:02:43.484 LIB libspdk_bdev_nvme.a 00:02:43.743 SO libspdk_bdev_nvme.so.7.1 00:02:43.743 SYMLINK libspdk_bdev_nvme.so 00:02:44.681 CC module/event/subsystems/vmd/vmd.o 00:02:44.681 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:44.681 CC module/event/subsystems/iobuf/iobuf.o 00:02:44.681 CC module/event/subsystems/sock/sock.o 00:02:44.681 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:44.681 CC module/event/subsystems/fsdev/fsdev.o 00:02:44.681 CC module/event/subsystems/keyring/keyring.o 00:02:44.681 CC module/event/subsystems/scheduler/scheduler.o 00:02:44.681 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:44.681 LIB libspdk_event_keyring.a 00:02:44.681 LIB libspdk_event_vmd.a 00:02:44.681 LIB libspdk_event_scheduler.a 00:02:44.681 LIB libspdk_event_vhost_blk.a 00:02:44.681 LIB libspdk_event_sock.a 00:02:44.681 LIB libspdk_event_iobuf.a 00:02:44.681 LIB libspdk_event_fsdev.a 00:02:44.681 SO 
libspdk_event_keyring.so.1.0 00:02:44.681 SO libspdk_event_iobuf.so.3.0 00:02:44.681 SO libspdk_event_vmd.so.6.0 00:02:44.681 SO libspdk_event_scheduler.so.4.0 00:02:44.681 SO libspdk_event_sock.so.5.0 00:02:44.681 SO libspdk_event_fsdev.so.1.0 00:02:44.681 SO libspdk_event_vhost_blk.so.3.0 00:02:44.681 SYMLINK libspdk_event_keyring.so 00:02:44.681 SYMLINK libspdk_event_iobuf.so 00:02:44.681 SYMLINK libspdk_event_vmd.so 00:02:44.681 SYMLINK libspdk_event_vhost_blk.so 00:02:44.681 SYMLINK libspdk_event_scheduler.so 00:02:44.681 SYMLINK libspdk_event_sock.so 00:02:44.681 SYMLINK libspdk_event_fsdev.so 00:02:45.250 CC module/event/subsystems/accel/accel.o 00:02:45.250 LIB libspdk_event_accel.a 00:02:45.250 SO libspdk_event_accel.so.6.0 00:02:45.250 SYMLINK libspdk_event_accel.so 00:02:45.817 CC module/event/subsystems/bdev/bdev.o 00:02:45.817 LIB libspdk_event_bdev.a 00:02:45.817 SO libspdk_event_bdev.so.6.0 00:02:46.076 SYMLINK libspdk_event_bdev.so 00:02:46.336 CC module/event/subsystems/scsi/scsi.o 00:02:46.336 CC module/event/subsystems/ublk/ublk.o 00:02:46.336 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:46.336 CC module/event/subsystems/nbd/nbd.o 00:02:46.336 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:46.594 LIB libspdk_event_ublk.a 00:02:46.594 LIB libspdk_event_scsi.a 00:02:46.594 LIB libspdk_event_nbd.a 00:02:46.594 SO libspdk_event_ublk.so.3.0 00:02:46.594 SO libspdk_event_scsi.so.6.0 00:02:46.594 SO libspdk_event_nbd.so.6.0 00:02:46.594 LIB libspdk_event_nvmf.a 00:02:46.594 SYMLINK libspdk_event_ublk.so 00:02:46.594 SYMLINK libspdk_event_scsi.so 00:02:46.594 SYMLINK libspdk_event_nbd.so 00:02:46.594 SO libspdk_event_nvmf.so.6.0 00:02:46.594 SYMLINK libspdk_event_nvmf.so 00:02:46.854 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.854 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:47.114 LIB libspdk_event_vhost_scsi.a 00:02:47.114 LIB libspdk_event_iscsi.a 00:02:47.114 SO libspdk_event_vhost_scsi.so.3.0 00:02:47.114 SO libspdk_event_iscsi.so.6.0 00:02:47.114 SYMLINK libspdk_event_vhost_scsi.so 00:02:47.114 SYMLINK libspdk_event_iscsi.so 00:02:47.372 SO libspdk.so.6.0 00:02:47.372 SYMLINK libspdk.so 00:02:47.951 CC app/trace_record/trace_record.o 00:02:47.951 CC app/spdk_top/spdk_top.o 00:02:47.951 CXX app/trace/trace.o 00:02:47.951 CC app/spdk_nvme_identify/identify.o 00:02:47.951 CC app/spdk_nvme_discover/discovery_aer.o 00:02:47.951 CC app/spdk_nvme_perf/perf.o 00:02:47.951 CC app/spdk_lspci/spdk_lspci.o 00:02:47.951 CC test/rpc_client/rpc_client_test.o 00:02:47.951 TEST_HEADER include/spdk/accel.h 00:02:47.951 TEST_HEADER include/spdk/accel_module.h 00:02:47.951 TEST_HEADER include/spdk/assert.h 00:02:47.951 TEST_HEADER include/spdk/base64.h 00:02:47.951 TEST_HEADER include/spdk/barrier.h 00:02:47.951 TEST_HEADER include/spdk/bdev.h 00:02:47.951 TEST_HEADER include/spdk/bit_array.h 00:02:47.951 TEST_HEADER include/spdk/bdev_module.h 00:02:47.951 TEST_HEADER include/spdk/bdev_zone.h 00:02:47.951 TEST_HEADER include/spdk/bit_pool.h 00:02:47.951 TEST_HEADER include/spdk/blob_bdev.h 00:02:47.951 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:47.951 TEST_HEADER include/spdk/blobfs.h 00:02:47.951 TEST_HEADER include/spdk/config.h 00:02:47.951 TEST_HEADER include/spdk/blob.h 00:02:47.951 TEST_HEADER include/spdk/conf.h 00:02:47.951 TEST_HEADER include/spdk/crc32.h 00:02:47.951 TEST_HEADER include/spdk/cpuset.h 00:02:47.951 TEST_HEADER include/spdk/crc64.h 00:02:47.951 TEST_HEADER include/spdk/crc16.h 00:02:47.951 TEST_HEADER include/spdk/dma.h 00:02:47.951 
TEST_HEADER include/spdk/endian.h 00:02:47.951 TEST_HEADER include/spdk/dif.h 00:02:47.951 TEST_HEADER include/spdk/env.h 00:02:47.951 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.951 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.951 TEST_HEADER include/spdk/event.h 00:02:47.951 TEST_HEADER include/spdk/fd.h 00:02:47.951 TEST_HEADER include/spdk/fd_group.h 00:02:47.951 TEST_HEADER include/spdk/fsdev.h 00:02:47.951 TEST_HEADER include/spdk/fsdev_module.h 00:02:47.951 TEST_HEADER include/spdk/file.h 00:02:47.951 TEST_HEADER include/spdk/ftl.h 00:02:47.951 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.951 TEST_HEADER include/spdk/hexlify.h 00:02:47.951 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.951 TEST_HEADER include/spdk/histogram_data.h 00:02:47.951 CC app/spdk_dd/spdk_dd.o 00:02:47.951 TEST_HEADER include/spdk/idxd.h 00:02:47.951 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.951 TEST_HEADER include/spdk/init.h 00:02:47.951 TEST_HEADER include/spdk/ioat.h 00:02:47.951 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.951 CC app/nvmf_tgt/nvmf_main.o 00:02:47.951 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.951 TEST_HEADER include/spdk/json.h 00:02:47.951 TEST_HEADER include/spdk/keyring.h 00:02:47.951 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.951 TEST_HEADER include/spdk/keyring_module.h 00:02:47.951 TEST_HEADER include/spdk/lvol.h 00:02:47.951 TEST_HEADER include/spdk/likely.h 00:02:47.951 TEST_HEADER include/spdk/log.h 00:02:47.951 TEST_HEADER include/spdk/memory.h 00:02:47.951 TEST_HEADER include/spdk/md5.h 00:02:47.951 CC app/spdk_tgt/spdk_tgt.o 00:02:47.951 TEST_HEADER include/spdk/nbd.h 00:02:47.951 TEST_HEADER include/spdk/net.h 00:02:47.951 TEST_HEADER include/spdk/mmio.h 00:02:47.951 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.951 TEST_HEADER include/spdk/notify.h 00:02:47.951 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.951 TEST_HEADER include/spdk/nvme.h 00:02:47.951 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.951 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.951 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.951 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.951 TEST_HEADER include/spdk/nvmf.h 00:02:47.951 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.951 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.951 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.951 TEST_HEADER include/spdk/opal.h 00:02:47.951 TEST_HEADER include/spdk/opal_spec.h 00:02:47.951 TEST_HEADER include/spdk/pci_ids.h 00:02:47.951 TEST_HEADER include/spdk/pipe.h 00:02:47.951 TEST_HEADER include/spdk/queue.h 00:02:47.952 TEST_HEADER include/spdk/rpc.h 00:02:47.952 TEST_HEADER include/spdk/reduce.h 00:02:47.952 TEST_HEADER include/spdk/scsi.h 00:02:47.952 TEST_HEADER include/spdk/scheduler.h 00:02:47.952 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.952 TEST_HEADER include/spdk/sock.h 00:02:47.952 TEST_HEADER include/spdk/string.h 00:02:47.952 TEST_HEADER include/spdk/thread.h 00:02:47.952 TEST_HEADER include/spdk/trace.h 00:02:47.952 TEST_HEADER include/spdk/stdinc.h 00:02:47.952 TEST_HEADER include/spdk/trace_parser.h 00:02:47.952 TEST_HEADER include/spdk/util.h 00:02:47.952 TEST_HEADER include/spdk/tree.h 00:02:47.952 TEST_HEADER include/spdk/ublk.h 00:02:47.952 TEST_HEADER include/spdk/uuid.h 00:02:47.952 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.952 TEST_HEADER include/spdk/version.h 00:02:47.952 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.952 TEST_HEADER include/spdk/vmd.h 00:02:47.952 TEST_HEADER include/spdk/vhost.h 00:02:47.952 TEST_HEADER include/spdk/zipf.h 00:02:47.952 CXX 
test/cpp_headers/accel.o 00:02:47.952 TEST_HEADER include/spdk/xor.h 00:02:47.952 CXX test/cpp_headers/assert.o 00:02:47.952 CXX test/cpp_headers/accel_module.o 00:02:47.952 CXX test/cpp_headers/barrier.o 00:02:47.952 CXX test/cpp_headers/bdev.o 00:02:47.952 CXX test/cpp_headers/bdev_module.o 00:02:47.952 CXX test/cpp_headers/bit_array.o 00:02:47.952 CXX test/cpp_headers/base64.o 00:02:47.952 CXX test/cpp_headers/bdev_zone.o 00:02:47.952 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.952 CXX test/cpp_headers/blob_bdev.o 00:02:47.952 CXX test/cpp_headers/bit_pool.o 00:02:47.952 CXX test/cpp_headers/blobfs.o 00:02:47.952 CXX test/cpp_headers/blob.o 00:02:47.952 CXX test/cpp_headers/conf.o 00:02:47.952 CXX test/cpp_headers/cpuset.o 00:02:47.952 CXX test/cpp_headers/config.o 00:02:47.952 CXX test/cpp_headers/crc16.o 00:02:47.952 CXX test/cpp_headers/crc64.o 00:02:47.952 CXX test/cpp_headers/dif.o 00:02:47.952 CXX test/cpp_headers/crc32.o 00:02:47.952 CXX test/cpp_headers/dma.o 00:02:47.952 CXX test/cpp_headers/endian.o 00:02:47.952 CXX test/cpp_headers/env_dpdk.o 00:02:47.952 CXX test/cpp_headers/env.o 00:02:47.952 CXX test/cpp_headers/event.o 00:02:47.952 CXX test/cpp_headers/fd_group.o 00:02:47.952 CXX test/cpp_headers/file.o 00:02:47.952 CXX test/cpp_headers/fd.o 00:02:47.952 CXX test/cpp_headers/fsdev_module.o 00:02:47.952 CXX test/cpp_headers/fsdev.o 00:02:47.952 CXX test/cpp_headers/gpt_spec.o 00:02:47.952 CXX test/cpp_headers/ftl.o 00:02:47.952 CXX test/cpp_headers/hexlify.o 00:02:47.952 CXX test/cpp_headers/histogram_data.o 00:02:47.952 CXX test/cpp_headers/idxd.o 00:02:47.952 CXX test/cpp_headers/idxd_spec.o 00:02:47.952 CXX test/cpp_headers/ioat.o 00:02:47.952 CXX test/cpp_headers/init.o 00:02:47.952 CXX test/cpp_headers/ioat_spec.o 00:02:47.952 CXX test/cpp_headers/iscsi_spec.o 00:02:47.952 CXX test/cpp_headers/keyring.o 00:02:47.952 CXX test/cpp_headers/jsonrpc.o 00:02:47.952 CXX test/cpp_headers/json.o 00:02:47.952 CXX test/cpp_headers/log.o 00:02:47.952 CXX test/cpp_headers/keyring_module.o 00:02:47.952 CXX test/cpp_headers/likely.o 00:02:47.952 CXX test/cpp_headers/lvol.o 00:02:47.952 CXX test/cpp_headers/md5.o 00:02:47.952 CXX test/cpp_headers/nbd.o 00:02:47.952 CXX test/cpp_headers/memory.o 00:02:47.952 CXX test/cpp_headers/mmio.o 00:02:47.952 CXX test/cpp_headers/notify.o 00:02:47.952 CXX test/cpp_headers/net.o 00:02:47.952 CXX test/cpp_headers/nvme.o 00:02:47.952 CXX test/cpp_headers/nvme_intel.o 00:02:47.952 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.952 CXX test/cpp_headers/nvme_zns.o 00:02:47.952 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.952 CXX test/cpp_headers/nvme_spec.o 00:02:47.952 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.952 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.952 CXX test/cpp_headers/nvmf_spec.o 00:02:47.952 CXX test/cpp_headers/nvmf.o 00:02:47.952 CXX test/cpp_headers/nvmf_transport.o 00:02:47.952 CXX test/cpp_headers/opal.o 00:02:47.952 CXX test/cpp_headers/opal_spec.o 00:02:47.952 CXX test/cpp_headers/pci_ids.o 00:02:47.952 CXX test/cpp_headers/pipe.o 00:02:47.952 CXX test/cpp_headers/queue.o 00:02:47.952 CXX test/cpp_headers/reduce.o 00:02:47.952 CXX test/cpp_headers/rpc.o 00:02:47.952 CXX test/cpp_headers/scheduler.o 00:02:47.952 CXX test/cpp_headers/scsi_spec.o 00:02:47.952 CXX test/cpp_headers/scsi.o 00:02:47.952 CXX test/cpp_headers/sock.o 00:02:47.952 CC examples/ioat/perf/perf.o 00:02:47.952 CXX test/cpp_headers/stdinc.o 00:02:47.952 CXX test/cpp_headers/string.o 00:02:47.952 CXX test/cpp_headers/thread.o 00:02:47.952 CXX 
test/cpp_headers/trace.o 00:02:47.952 CXX test/cpp_headers/trace_parser.o 00:02:47.952 CC test/app/jsoncat/jsoncat.o 00:02:47.952 CXX test/cpp_headers/tree.o 00:02:47.952 CXX test/cpp_headers/ublk.o 00:02:47.952 CC examples/util/zipf/zipf.o 00:02:47.952 CC app/fio/nvme/fio_plugin.o 00:02:47.952 CC test/app/stub/stub.o 00:02:48.221 CC test/app/histogram_perf/histogram_perf.o 00:02:48.221 CC test/env/memory/memory_ut.o 00:02:48.221 CC test/thread/poller_perf/poller_perf.o 00:02:48.221 CC examples/ioat/verify/verify.o 00:02:48.221 CC test/env/pci/pci_ut.o 00:02:48.221 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:48.221 CC test/app/bdev_svc/bdev_svc.o 00:02:48.221 CC test/env/vtophys/vtophys.o 00:02:48.221 CC test/dma/test_dma/test_dma.o 00:02:48.221 CXX test/cpp_headers/util.o 00:02:48.221 CC app/fio/bdev/fio_plugin.o 00:02:48.221 LINK spdk_lspci 00:02:48.492 LINK spdk_nvme_discover 00:02:48.492 LINK rpc_client_test 00:02:48.492 LINK iscsi_tgt 00:02:48.492 LINK interrupt_tgt 00:02:48.492 LINK nvmf_tgt 00:02:48.492 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:48.492 CC test/env/mem_callbacks/mem_callbacks.o 00:02:48.758 LINK spdk_trace_record 00:02:48.758 LINK spdk_tgt 00:02:48.758 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:48.758 LINK zipf 00:02:48.758 LINK jsoncat 00:02:48.758 LINK poller_perf 00:02:48.758 CXX test/cpp_headers/uuid.o 00:02:48.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:48.758 CXX test/cpp_headers/version.o 00:02:48.758 CXX test/cpp_headers/vfio_user_pci.o 00:02:48.758 CXX test/cpp_headers/vfio_user_spec.o 00:02:48.758 CXX test/cpp_headers/vhost.o 00:02:48.758 LINK histogram_perf 00:02:48.758 CXX test/cpp_headers/vmd.o 00:02:48.758 LINK stub 00:02:48.758 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:48.758 CXX test/cpp_headers/xor.o 00:02:48.758 CXX test/cpp_headers/zipf.o 00:02:48.758 LINK vtophys 00:02:48.758 LINK bdev_svc 00:02:48.758 LINK env_dpdk_post_init 00:02:48.758 LINK ioat_perf 00:02:48.758 LINK verify 00:02:49.190 LINK spdk_trace 00:02:49.190 LINK spdk_dd 00:02:49.190 LINK pci_ut 00:02:49.191 LINK nvme_fuzz 00:02:49.191 LINK test_dma 00:02:49.191 LINK spdk_nvme 00:02:49.191 LINK spdk_bdev 00:02:49.191 LINK spdk_nvme_identify 00:02:49.191 LINK vhost_fuzz 00:02:49.191 LINK spdk_top 00:02:49.191 CC test/event/reactor_perf/reactor_perf.o 00:02:49.191 CC test/event/reactor/reactor.o 00:02:49.191 CC test/event/event_perf/event_perf.o 00:02:49.191 CC test/event/app_repeat/app_repeat.o 00:02:49.191 LINK mem_callbacks 00:02:49.191 LINK spdk_nvme_perf 00:02:49.191 CC test/event/scheduler/scheduler.o 00:02:49.191 CC app/vhost/vhost.o 00:02:49.191 CC examples/idxd/perf/perf.o 00:02:49.191 CC examples/vmd/led/led.o 00:02:49.191 CC examples/vmd/lsvmd/lsvmd.o 00:02:49.191 CC examples/sock/hello_world/hello_sock.o 00:02:49.450 CC examples/thread/thread/thread_ex.o 00:02:49.450 LINK reactor 00:02:49.450 LINK reactor_perf 00:02:49.450 LINK event_perf 00:02:49.450 LINK lsvmd 00:02:49.450 LINK app_repeat 00:02:49.450 LINK led 00:02:49.450 LINK vhost 00:02:49.450 LINK scheduler 00:02:49.450 LINK hello_sock 00:02:49.708 LINK idxd_perf 00:02:49.708 LINK thread 00:02:49.708 LINK memory_ut 00:02:49.708 CC test/nvme/simple_copy/simple_copy.o 00:02:49.708 CC test/nvme/connect_stress/connect_stress.o 00:02:49.708 CC test/nvme/sgl/sgl.o 00:02:49.708 CC test/nvme/boot_partition/boot_partition.o 00:02:49.708 CC test/nvme/reset/reset.o 00:02:49.708 CC test/nvme/overhead/overhead.o 00:02:49.708 CC test/nvme/fdp/fdp.o 00:02:49.708 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:49.708 CC test/nvme/reserve/reserve.o 00:02:49.708 CC test/nvme/err_injection/err_injection.o 00:02:49.708 CC test/nvme/startup/startup.o 00:02:49.708 CC test/nvme/aer/aer.o 00:02:49.708 CC test/nvme/e2edp/nvme_dp.o 00:02:49.708 CC test/nvme/cuse/cuse.o 00:02:49.708 CC test/nvme/fused_ordering/fused_ordering.o 00:02:49.708 CC test/blobfs/mkfs/mkfs.o 00:02:49.708 CC test/nvme/compliance/nvme_compliance.o 00:02:49.708 CC test/accel/dif/dif.o 00:02:49.708 CC test/lvol/esnap/esnap.o 00:02:49.708 LINK boot_partition 00:02:49.708 LINK startup 00:02:49.708 LINK err_injection 00:02:49.967 LINK connect_stress 00:02:49.967 LINK doorbell_aers 00:02:49.967 LINK reserve 00:02:49.967 LINK simple_copy 00:02:49.967 LINK mkfs 00:02:49.967 LINK fused_ordering 00:02:49.967 LINK reset 00:02:49.967 LINK sgl 00:02:49.967 LINK nvme_dp 00:02:49.967 LINK overhead 00:02:49.967 LINK aer 00:02:49.967 LINK nvme_compliance 00:02:49.967 LINK fdp 00:02:49.967 CC examples/nvme/hotplug/hotplug.o 00:02:49.967 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.967 CC examples/nvme/hello_world/hello_world.o 00:02:49.967 CC examples/nvme/arbitration/arbitration.o 00:02:49.967 CC examples/nvme/reconnect/reconnect.o 00:02:49.967 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:49.967 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:49.967 CC examples/nvme/abort/abort.o 00:02:50.226 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:50.226 CC examples/accel/perf/accel_perf.o 00:02:50.226 CC examples/blob/hello_world/hello_blob.o 00:02:50.226 CC examples/blob/cli/blobcli.o 00:02:50.226 LINK iscsi_fuzz 00:02:50.226 LINK pmr_persistence 00:02:50.226 LINK cmb_copy 00:02:50.226 LINK hotplug 00:02:50.226 LINK hello_world 00:02:50.226 LINK dif 00:02:50.226 LINK arbitration 00:02:50.226 LINK reconnect 00:02:50.226 LINK abort 00:02:50.484 LINK hello_blob 00:02:50.484 LINK hello_fsdev 00:02:50.484 LINK nvme_manage 00:02:50.484 LINK accel_perf 00:02:50.484 LINK blobcli 00:02:50.743 LINK cuse 00:02:50.743 CC test/bdev/bdevio/bdevio.o 00:02:51.002 CC examples/bdev/hello_world/hello_bdev.o 00:02:51.002 CC examples/bdev/bdevperf/bdevperf.o 00:02:51.261 LINK bdevio 00:02:51.261 LINK hello_bdev 00:02:51.829 LINK bdevperf 00:02:52.397 CC examples/nvmf/nvmf/nvmf.o 00:02:52.655 LINK nvmf 00:02:53.592 LINK esnap 00:02:53.852 00:02:53.852 real 0m55.806s 00:02:53.852 user 7m43.612s 00:02:53.852 sys 4m11.199s 00:02:53.852 17:51:01 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.852 17:51:01 make -- common/autotest_common.sh@10 -- $ set +x 00:02:53.852 ************************************ 00:02:53.852 END TEST make 00:02:53.852 ************************************ 00:02:53.852 17:51:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:53.852 17:51:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:53.852 17:51:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:53.852 17:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.852 17:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:53.852 17:51:01 -- pm/common@44 -- $ pid=2077777 00:02:53.852 17:51:01 -- pm/common@50 -- $ kill -TERM 2077777 00:02:53.852 17:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.852 17:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:53.852 17:51:01 -- pm/common@44 -- $ pid=2077779 
00:02:53.852 17:51:01 -- pm/common@50 -- $ kill -TERM 2077779 00:02:53.852 17:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.852 17:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:53.852 17:51:01 -- pm/common@44 -- $ pid=2077781 00:02:53.852 17:51:01 -- pm/common@50 -- $ kill -TERM 2077781 00:02:53.852 17:51:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.852 17:51:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:53.852 17:51:01 -- pm/common@44 -- $ pid=2077804 00:02:53.852 17:51:01 -- pm/common@50 -- $ sudo -E kill -TERM 2077804 00:02:53.852 17:51:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:53.852 17:51:01 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:53.852 17:51:01 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:53.852 17:51:01 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:53.852 17:51:01 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:54.112 17:51:01 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:54.112 17:51:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:54.112 17:51:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:54.112 17:51:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:54.112 17:51:01 -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.112 17:51:01 -- scripts/common.sh@336 -- # read -ra ver1 00:02:54.112 17:51:01 -- scripts/common.sh@337 -- # IFS=.-: 00:02:54.112 17:51:01 -- scripts/common.sh@337 -- # read -ra ver2 00:02:54.112 17:51:01 -- scripts/common.sh@338 -- # local 'op=<' 00:02:54.112 17:51:01 -- scripts/common.sh@340 -- # ver1_l=2 00:02:54.112 17:51:01 -- scripts/common.sh@341 -- # ver2_l=1 00:02:54.112 17:51:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:54.112 17:51:01 -- scripts/common.sh@344 -- # case "$op" in 00:02:54.113 17:51:01 -- scripts/common.sh@345 -- # : 1 00:02:54.113 17:51:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:54.113 17:51:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.113 17:51:01 -- scripts/common.sh@365 -- # decimal 1 00:02:54.113 17:51:01 -- scripts/common.sh@353 -- # local d=1 00:02:54.113 17:51:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.113 17:51:01 -- scripts/common.sh@355 -- # echo 1 00:02:54.113 17:51:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:54.113 17:51:01 -- scripts/common.sh@366 -- # decimal 2 00:02:54.113 17:51:01 -- scripts/common.sh@353 -- # local d=2 00:02:54.113 17:51:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.113 17:51:01 -- scripts/common.sh@355 -- # echo 2 00:02:54.113 17:51:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:54.113 17:51:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:54.113 17:51:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:54.113 17:51:01 -- scripts/common.sh@368 -- # return 0 00:02:54.113 17:51:01 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.113 17:51:01 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:54.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.113 --rc genhtml_branch_coverage=1 00:02:54.113 --rc genhtml_function_coverage=1 00:02:54.113 --rc genhtml_legend=1 00:02:54.113 --rc geninfo_all_blocks=1 00:02:54.113 --rc geninfo_unexecuted_blocks=1 00:02:54.113 00:02:54.113 ' 00:02:54.113 17:51:01 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:54.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.113 --rc genhtml_branch_coverage=1 00:02:54.113 --rc genhtml_function_coverage=1 00:02:54.113 --rc genhtml_legend=1 00:02:54.113 --rc geninfo_all_blocks=1 00:02:54.113 --rc geninfo_unexecuted_blocks=1 00:02:54.113 00:02:54.113 ' 00:02:54.113 17:51:01 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:54.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.113 --rc genhtml_branch_coverage=1 00:02:54.113 --rc genhtml_function_coverage=1 00:02:54.113 --rc genhtml_legend=1 00:02:54.113 --rc geninfo_all_blocks=1 00:02:54.113 --rc geninfo_unexecuted_blocks=1 00:02:54.113 00:02:54.113 ' 00:02:54.113 17:51:01 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:54.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.113 --rc genhtml_branch_coverage=1 00:02:54.113 --rc genhtml_function_coverage=1 00:02:54.113 --rc genhtml_legend=1 00:02:54.113 --rc geninfo_all_blocks=1 00:02:54.113 --rc geninfo_unexecuted_blocks=1 00:02:54.113 00:02:54.113 ' 00:02:54.113 17:51:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:54.113 17:51:01 -- nvmf/common.sh@7 -- # uname -s 00:02:54.113 17:51:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:54.113 17:51:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:54.113 17:51:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:54.113 17:51:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:54.113 17:51:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:54.113 17:51:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:54.113 17:51:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:54.113 17:51:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:54.113 17:51:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:54.113 17:51:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:54.113 17:51:01 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:02:54.113 17:51:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:02:54.113 17:51:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:54.113 17:51:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:54.113 17:51:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:54.113 17:51:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:54.113 17:51:01 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:02:54.113 17:51:01 -- scripts/common.sh@15 -- # shopt -s extglob
00:02:54.113 17:51:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:54.113 17:51:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:54.113 17:51:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:54.113 17:51:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.113 17:51:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.113 17:51:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.113 17:51:01 -- paths/export.sh@5 -- # export PATH
00:02:54.113 17:51:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.113 17:51:01 -- nvmf/common.sh@51 -- # : 0
00:02:54.113 17:51:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:54.113 17:51:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:54.113 17:51:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:54.113 17:51:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:54.113 17:51:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:54.113 17:51:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:54.113 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:54.113 17:51:01 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:54.113 17:51:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:54.113 17:51:01 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:02:54.113 17:51:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:54.113 17:51:01 -- spdk/autotest.sh@32 -- # uname -s
00:02:54.113 17:51:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:54.113 17:51:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:54.113 17:51:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:02:54.113 17:51:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:54.113 17:51:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:02:54.113 17:51:01 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:54.113 17:51:01 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:54.113 17:51:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:54.113 17:51:01 -- spdk/autotest.sh@48 -- # udevadm_pid=2140781
00:02:54.113 17:51:01 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:54.113 17:51:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:54.113 17:51:01 -- pm/common@17 -- # local monitor
00:02:54.113 17:51:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.113 17:51:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.113 17:51:01 -- pm/common@21 -- # date +%s
00:02:54.113 17:51:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.113 17:51:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.113 17:51:01 -- pm/common@21 -- # date +%s
00:02:54.113 17:51:01 -- pm/common@25 -- # sleep 1
00:02:54.113 17:51:01 -- pm/common@21 -- # date +%s
00:02:54.113 17:51:01 -- pm/common@21 -- # date +%s
00:02:54.113 17:51:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763061
00:02:54.113 17:51:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763061
00:02:54.113 17:51:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763061
00:02:54.113 17:51:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733763061
00:02:54.113 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763061_collect-cpu-load.pm.log
00:02:54.113 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763061_collect-vmstat.pm.log
00:02:54.113 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763061_collect-cpu-temp.pm.log
00:02:54.113 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733763061_collect-bmc-pm.bmc.pm.log
00:02:55.052 17:51:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:55.052 17:51:02 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:55.052 17:51:02 -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:55.052 17:51:02 -- common/autotest_common.sh@10 -- # set +x
00:02:55.052 17:51:02 -- spdk/autotest.sh@59 -- # create_test_list
00:02:55.052 17:51:02 -- common/autotest_common.sh@752 -- # xtrace_disable
00:02:55.052 17:51:02 -- common/autotest_common.sh@10 -- # set +x
00:02:55.052 17:51:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:02:55.052 17:51:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:55.052 17:51:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:55.052 17:51:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:02:55.052 17:51:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:55.052 17:51:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:55.052 17:51:02 -- common/autotest_common.sh@1457 -- # uname
00:02:55.052 17:51:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:02:55.052 17:51:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:55.052 17:51:03 -- common/autotest_common.sh@1477 -- # uname
00:02:55.052 17:51:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:02:55.052 17:51:03 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:55.052 17:51:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:55.311 lcov: LCOV version 1.15
00:02:55.311 17:51:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:03:07.528 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:07.528 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:22.410 17:51:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:22.410 17:51:28 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:22.410 17:51:28 -- common/autotest_common.sh@10 -- # set +x
00:03:22.410 17:51:28 -- spdk/autotest.sh@78 -- # rm -f
00:03:22.410 17:51:28 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:23.787 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:23.787 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:24.047 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:24.047 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:24.047 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:24.047 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:24.047 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:24.047 17:51:31 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:24.047 17:51:31 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:24.047 17:51:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:24.047 17:51:31 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:24.047 17:51:31 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:24.047 17:51:31 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:24.047 17:51:31 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:03:24.047 17:51:31 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0
00:03:24.047 17:51:31 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:03:24.047 17:51:31 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:24.047 17:51:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:24.047 17:51:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:24.047 17:51:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:24.047 17:51:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:24.047 17:51:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:24.047 17:51:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:24.047 17:51:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:24.047 17:51:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:24.047 17:51:31 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:24.047 No valid GPT data, bailing
00:03:24.047 17:51:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:24.047 17:51:31 -- scripts/common.sh@394 -- # pt=
00:03:24.047 17:51:31 -- scripts/common.sh@395 -- # return 1
00:03:24.047 17:51:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:24.047 1+0 records in
00:03:24.047 1+0 records out
00:03:24.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464611 s, 226 MB/s
00:03:24.047 17:51:32 -- spdk/autotest.sh@105 -- # sync
00:03:24.047 17:51:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:24.047 17:51:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:24.047 17:51:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:32.169 17:51:39 -- spdk/autotest.sh@111 -- # uname -s
00:03:32.169 17:51:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:32.169 17:51:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:32.169 17:51:39 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:35.459 Hugepages
00:03:35.459 node hugesize free / total
00:03:35.459 node0 1048576kB 0 / 0
00:03:35.459 node0 2048kB 0 / 0
00:03:35.459 node1 1048576kB 0 / 0
00:03:35.459 node1 2048kB 0 / 0
00:03:35.459
00:03:35.459 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:35.459 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:35.459 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:35.459 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:35.459 17:51:43 -- spdk/autotest.sh@117 -- # uname -s
00:03:35.459 17:51:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:35.459 17:51:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:35.459 17:51:43 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:38.747 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:38.747 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:39.006 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:39.006 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:40.913 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:40.913 17:51:48 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:42.294 17:51:49 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:42.294 17:51:49 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:42.294 17:51:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:42.294 17:51:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:42.294 17:51:49 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:42.294 17:51:49 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:42.294 17:51:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:42.294 17:51:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:42.294 17:51:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:42.294 17:51:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:42.294 17:51:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:03:42.294 17:51:49 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:45.584 Waiting for block devices as requested
00:03:45.584 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:45.584 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:45.584 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:45.843 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:45.843 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:45.843 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:46.102 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:46.102 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:46.102 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:46.362 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:46.362 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:46.362 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:46.621 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:46.621 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:46.621 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:46.880 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:46.880 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:03:47.139 17:51:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:47.139 17:51:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme
00:03:47.139 17:51:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]]
00:03:47.139 17:51:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:47.139 17:51:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:47.139 17:51:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:47.139 17:51:54 -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:03:47.139 17:51:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:47.139 17:51:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:47.139 17:51:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:47.139 17:51:54 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:47.139 17:51:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:47.139 17:51:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:47.139 17:51:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:47.139 17:51:55 -- common/autotest_common.sh@1543 -- # continue
00:03:47.139 17:51:55 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:47.139 17:51:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:47.139 17:51:55 -- common/autotest_common.sh@10 -- # set +x
00:03:47.139 17:51:55 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:47.139 17:51:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:47.139 17:51:55 -- common/autotest_common.sh@10 -- # set +x
00:03:47.139 17:51:55 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:51.444 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:51.444 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:52.826 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:52.826 17:52:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:52.826 17:52:00 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:52.826 17:52:00 -- common/autotest_common.sh@10 -- # set +x
00:03:52.826 17:52:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:52.826 17:52:00 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:52.826 17:52:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:52.826 17:52:00 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:52.826 17:52:00 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:52.826 17:52:00 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:52.826 17:52:00 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:52.826 17:52:00 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:52.826 17:52:00 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:52.826 17:52:00 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:52.826 17:52:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:52.826 17:52:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:52.826 17:52:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:53.085 17:52:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:53.085 17:52:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:03:53.085 17:52:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:53.085 17:52:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device
00:03:53.085 17:52:00 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:53.085 17:52:00 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:53.085 17:52:00 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:53.085 17:52:00 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:53.085 17:52:00 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0
00:03:53.085 17:52:00 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]]
00:03:53.085 17:52:00 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2157170
00:03:53.085 17:52:00 -- common/autotest_common.sh@1585 -- # waitforlisten 2157170
00:03:53.085 17:52:00 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:03:53.085 17:52:00 -- common/autotest_common.sh@835 -- # '[' -z 2157170 ']'
00:03:53.085 17:52:00 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:53.085 17:52:00 -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:53.085 17:52:00 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:53.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:53.085 17:52:00 -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:53.085 17:52:00 -- common/autotest_common.sh@10 -- # set +x
00:03:53.085 [2024-12-09 17:52:00.874247] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
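For context, the opal_revert_cleanup trace above finds candidate controllers by piping gen_nvme.sh through jq and then keeping only BDFs whose PCI device ID reads 0x0a54 from sysfs. A minimal standalone sketch of the same idea in bash (the helper name is ours; SPDK's own get_nvme_bdfs_by_id goes through gen_nvme.sh exactly as the trace shows):

    # Print every NVMe BDF whose PCI device ID matches the argument (e.g. 0x0a54).
    list_nvme_bdfs_by_id() {
        local want=$1 dev
        for dev in /sys/class/nvme/nvme*/device; do
            [[ -r $dev/device ]] || continue
            if [[ $(<"$dev/device") == "$want" ]]; then
                basename "$(readlink -f "$dev")"    # e.g. 0000:d8:00.0
            fi
        done
    }
    list_nvme_bdfs_by_id 0x0a54

The trace then continues with the spdk_tgt startup banner and its DPDK EAL parameters.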
00:03:53.085 [2024-12-09 17:52:00.874310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2157170 ]
00:03:53.085 [2024-12-09 17:52:00.968290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:53.085 [2024-12-09 17:52:01.008578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:54.021 17:52:01 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:54.021 17:52:01 -- common/autotest_common.sh@868 -- # return 0
00:03:54.021 17:52:01 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:54.021 17:52:01 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:54.021 17:52:01 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:03:57.309 nvme0n1
00:03:57.309 17:52:04 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:57.309 [2024-12-09 17:52:04.901495] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:03:57.309 request:
00:03:57.309 {
00:03:57.309 "nvme_ctrlr_name": "nvme0",
00:03:57.309 "password": "test",
00:03:57.309 "method": "bdev_nvme_opal_revert",
00:03:57.309 "req_id": 1
00:03:57.309 }
00:03:57.309 Got JSON-RPC error response
00:03:57.309 response:
00:03:57.309 {
00:03:57.309 "code": -32602,
00:03:57.309 "message": "Invalid parameters"
00:03:57.309 }
00:03:57.309 17:52:04 -- common/autotest_common.sh@1591 -- # true
00:03:57.310 17:52:04 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:57.310 17:52:04 -- common/autotest_common.sh@1595 -- # killprocess 2157170
00:03:57.310 17:52:04 -- common/autotest_common.sh@954 -- # '[' -z 2157170 ']'
00:03:57.310 17:52:04 -- common/autotest_common.sh@958 -- # kill -0 2157170
00:03:57.310 17:52:04 -- common/autotest_common.sh@959 -- # uname
00:03:57.310 17:52:04 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:57.310 17:52:04 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2157170
00:03:57.310 17:52:04 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:57.310 17:52:04 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:57.310 17:52:04 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2157170'
00:03:57.310 killing process with pid 2157170
00:03:57.310 17:52:04 -- common/autotest_common.sh@973 -- # kill 2157170
00:03:57.310 17:52:04 -- common/autotest_common.sh@978 -- # wait 2157170
00:03:59.844 17:52:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:59.844 17:52:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:59.844 17:52:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:59.844 17:52:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:59.844 17:52:07 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:59.844 17:52:07 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:59.844 17:52:07 -- common/autotest_common.sh@10 -- # set +x
00:03:59.844 17:52:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:59.844 17:52:07 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:03:59.844 17:52:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.844 17:52:07 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.844 17:52:07 -- common/autotest_common.sh@10 -- # set +x
00:03:59.844 ************************************
00:03:59.844 START TEST env
00:03:59.844 ************************************
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:03:59.844 * Looking for test storage...
00:03:59.844 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1711 -- # lcov --version
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:59.844 17:52:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:59.844 17:52:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:59.844 17:52:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:59.844 17:52:07 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:59.844 17:52:07 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:59.844 17:52:07 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:59.844 17:52:07 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:59.844 17:52:07 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:59.844 17:52:07 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:59.844 17:52:07 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:59.844 17:52:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:59.844 17:52:07 env -- scripts/common.sh@344 -- # case "$op" in
00:03:59.844 17:52:07 env -- scripts/common.sh@345 -- # : 1
00:03:59.844 17:52:07 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:59.844 17:52:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:59.844 17:52:07 env -- scripts/common.sh@365 -- # decimal 1
00:03:59.844 17:52:07 env -- scripts/common.sh@353 -- # local d=1
00:03:59.844 17:52:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:59.844 17:52:07 env -- scripts/common.sh@355 -- # echo 1
00:03:59.844 17:52:07 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:59.844 17:52:07 env -- scripts/common.sh@366 -- # decimal 2
00:03:59.844 17:52:07 env -- scripts/common.sh@353 -- # local d=2
00:03:59.844 17:52:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:59.844 17:52:07 env -- scripts/common.sh@355 -- # echo 2
00:03:59.844 17:52:07 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:59.844 17:52:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:59.844 17:52:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:59.844 17:52:07 env -- scripts/common.sh@368 -- # return 0
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.844 --rc genhtml_branch_coverage=1
00:03:59.844 --rc genhtml_function_coverage=1
00:03:59.844 --rc genhtml_legend=1
00:03:59.844 --rc geninfo_all_blocks=1
00:03:59.844 --rc geninfo_unexecuted_blocks=1
00:03:59.844
00:03:59.844 '
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.844 --rc genhtml_branch_coverage=1
00:03:59.844 --rc genhtml_function_coverage=1
00:03:59.844 --rc genhtml_legend=1
00:03:59.844 --rc geninfo_all_blocks=1
00:03:59.844 --rc geninfo_unexecuted_blocks=1
00:03:59.844
00:03:59.844 '
00:03:59.844 17:52:07 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:03:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.844 --rc genhtml_branch_coverage=1
00:03:59.845 --rc genhtml_function_coverage=1
00:03:59.845 --rc genhtml_legend=1
00:03:59.845 --rc geninfo_all_blocks=1
00:03:59.845 --rc geninfo_unexecuted_blocks=1
00:03:59.845
00:03:59.845 '
00:03:59.845 17:52:07 env -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:03:59.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:59.845 --rc genhtml_branch_coverage=1
00:03:59.845 --rc genhtml_function_coverage=1
00:03:59.845 --rc genhtml_legend=1
00:03:59.845 --rc geninfo_all_blocks=1
00:03:59.845 --rc geninfo_unexecuted_blocks=1
00:03:59.845
00:03:59.845 '
00:03:59.845 17:52:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:03:59.845 17:52:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:59.845 17:52:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:59.845 17:52:07 env -- common/autotest_common.sh@10 -- # set +x
00:04:00.105 ************************************
00:04:00.105 START TEST env_memory
00:04:00.105 ************************************
00:04:00.105 17:52:07 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:04:00.105
00:04:00.105
00:04:00.105 CUnit - A unit testing framework for C - Version 2.1-3
00:04:00.105 http://cunit.sourceforge.net/
00:04:00.105
00:04:00.105
00:04:00.105 Suite: memory
00:04:00.105 Test: alloc and free memory map ...[2024-12-09 17:52:07.900546] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:00.105 passed
00:04:00.105 Test: mem map translation ...[2024-12-09 17:52:07.919693] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:00.105 [2024-12-09 17:52:07.919708] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:00.105 [2024-12-09 17:52:07.919744] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:00.105 [2024-12-09 17:52:07.919753] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:00.105 passed
00:04:00.105 Test: mem map registration ...[2024-12-09 17:52:07.954890] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:00.105 [2024-12-09 17:52:07.954904] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:00.105 passed
00:04:00.105 Test: mem map adjacent registrations ...passed
00:04:00.105
00:04:00.105 Run Summary: Type Total Ran Passed Failed Inactive
00:04:00.105 suites 1 1 n/a 0 0
00:04:00.105 tests 4 4 4 0 0
00:04:00.105 asserts 152 152 152 0 n/a
00:04:00.105
00:04:00.105 Elapsed time = 0.134 seconds
00:04:00.105
00:04:00.105 real 0m0.147s
00:04:00.105 user 0m0.134s
00:04:00.105 sys 0m0.013s
00:04:00.105 17:52:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:00.105 17:52:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:00.105 ************************************
00:04:00.105 END TEST env_memory
00:04:00.105 ************************************
00:04:00.105 17:52:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:00.105 17:52:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:00.105 17:52:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:00.105 17:52:08 env -- common/autotest_common.sh@10 -- # set +x
00:04:00.105 ************************************
00:04:00.105 START TEST env_vtophys
00:04:00.105 ************************************
00:04:00.105 17:52:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:00.364 EAL: lib.eal log level changed from notice to debug
00:04:00.364 EAL: Detected lcore 0 as core 0 on socket 0
00:04:00.364 EAL: Detected lcore 1 as core 1 on socket 0
00:04:00.364 EAL: Detected lcore 2 as core 2 on socket 0
00:04:00.364 EAL: Detected lcore 3 as core 3 on socket 0
00:04:00.364 EAL: Detected lcore 4 as core 4 on socket 0
00:04:00.364 EAL: Detected lcore 5 as core 5 on socket 0
00:04:00.364 EAL: Detected lcore 6 as core 6 on socket 0
00:04:00.364 EAL: Detected lcore 7 as core 8 on socket 0
00:04:00.364 EAL: Detected lcore 8 as core 9 on socket 0
00:04:00.364 EAL: Detected lcore 9 as core 10 on socket 0
00:04:00.364 EAL: Detected lcore 10 as core 11 on socket 0
00:04:00.364 EAL: Detected lcore 11 as core 12 on socket 0
00:04:00.364 EAL: Detected lcore 12 as core 13 on socket 0
00:04:00.364 EAL: Detected lcore 13 as core 14 on socket 0
00:04:00.364 EAL: Detected lcore 14 as core 16 on socket 0
00:04:00.364 EAL: Detected lcore 15 as core 17 on socket 0
00:04:00.364 EAL: Detected lcore 16 as core 18 on socket 0
00:04:00.364 EAL: Detected lcore 17 as core 19 on socket 0
00:04:00.364 EAL: Detected lcore 18 as core 20 on socket 0
00:04:00.364 EAL: Detected lcore 19 as core 21 on socket 0
00:04:00.364 EAL: Detected lcore 20 as core 22 on socket 0
00:04:00.364 EAL: Detected lcore 21 as core 24 on socket 0
00:04:00.364 EAL: Detected lcore 22 as core 25 on socket 0
00:04:00.364 EAL: Detected lcore 23 as core 26 on socket 0
00:04:00.364 EAL: Detected lcore 24 as core 27 on socket 0
00:04:00.364 EAL: Detected lcore 25 as core 28 on socket 0
00:04:00.364 EAL: Detected lcore 26 as core 29 on socket 0
00:04:00.364 EAL: Detected lcore 27 as core 30 on socket 0
00:04:00.364 EAL: Detected lcore 28 as core 0 on socket 1
00:04:00.364 EAL: Detected lcore 29 as core 1 on socket 1
00:04:00.364 EAL: Detected lcore 30 as core 2 on socket 1
00:04:00.364 EAL: Detected lcore 31 as core 3 on socket 1
00:04:00.364 EAL: Detected lcore 32 as core 4 on socket 1
00:04:00.364 EAL: Detected lcore 33 as core 5 on socket 1
00:04:00.364 EAL: Detected lcore 34 as core 6 on socket 1
00:04:00.364 EAL: Detected lcore 35 as core 8 on socket 1
00:04:00.364 EAL: Detected lcore 36 as core 9 on socket 1
00:04:00.364 EAL: Detected lcore 37 as core 10 on socket 1
00:04:00.364 EAL: Detected lcore 38 as core 11 on socket 1
00:04:00.364 EAL: Detected lcore 39 as core 12 on socket 1
00:04:00.364 EAL: Detected lcore 40 as core 13 on socket 1
00:04:00.364 EAL: Detected lcore 41 as core 14 on socket 1
00:04:00.364 EAL: Detected lcore 42 as core 16 on socket 1
00:04:00.364 EAL: Detected lcore 43 as core 17 on socket 1
00:04:00.364 EAL: Detected lcore 44 as core 18 on socket 1
00:04:00.364 EAL: Detected lcore 45 as core 19 on socket 1
00:04:00.364 EAL: Detected lcore 46 as core 20 on socket 1
00:04:00.364 EAL: Detected lcore 47 as core 21 on socket 1
00:04:00.364 EAL: Detected lcore 48 as core 22 on socket 1
00:04:00.364 EAL: Detected lcore 49 as core 24 on socket 1
00:04:00.364 EAL: Detected lcore 50 as core 25 on socket 1
00:04:00.364 EAL: Detected lcore 51 as core 26 on socket 1
00:04:00.364 EAL: Detected lcore 52 as core 27 on socket 1
00:04:00.364 EAL: Detected lcore 53 as core 28 on socket 1
00:04:00.364 EAL: Detected lcore 54 as core 29 on socket 1
00:04:00.364 EAL: Detected lcore 55 as core 30 on socket 1
00:04:00.364 EAL: Detected lcore 56 as core 0 on socket 0
00:04:00.364 EAL: Detected lcore 57 as core 1 on socket 0
00:04:00.364 EAL: Detected lcore 58 as core 2 on socket 0
00:04:00.364 EAL: Detected lcore 59 as core 3 on socket 0
00:04:00.364 EAL: Detected lcore 60 as core 4 on socket 0
00:04:00.364 EAL: Detected lcore 61 as core 5 on socket 0
00:04:00.364 EAL: Detected lcore 62 as core 6 on socket 0
00:04:00.364 EAL: Detected lcore 63 as core 8 on socket 0
00:04:00.364 EAL: Detected lcore 64 as core 9 on socket 0
00:04:00.364 EAL: Detected lcore 65 as core 10 on socket 0
00:04:00.364 EAL: Detected lcore 66 as core 11 on socket 0
00:04:00.364 EAL: Detected lcore 67 as core 12 on socket 0
00:04:00.364 EAL: Detected lcore 68 as core 13 on socket 0
00:04:00.364 EAL: Detected lcore 69 as core 14 on socket 0
00:04:00.364 EAL: Detected lcore 70 as core 16 on socket 0
00:04:00.364 EAL: Detected lcore 71 as core 17 on socket 0
00:04:00.364 EAL: Detected lcore 72 as core 18 on socket 0
00:04:00.364 EAL: Detected lcore 73 as core 19 on socket 0
00:04:00.364 EAL: Detected lcore 74 as core 20 on socket 0
00:04:00.364 EAL: Detected lcore 75 as core 21 on socket 0
00:04:00.364 EAL: Detected lcore 76 as core 22 on socket 0
00:04:00.364 EAL: Detected lcore 77 as core 24 on socket 0
00:04:00.364 EAL: Detected lcore 78 as core 25 on socket 0
00:04:00.364 EAL: Detected lcore 79 as core 26 on socket 0
00:04:00.364 EAL: Detected lcore 80 as core 27 on socket 0
00:04:00.364 EAL: Detected lcore 81 as core 28 on socket 0
00:04:00.364 EAL: Detected lcore 82 as core 29 on socket 0
00:04:00.364 EAL: Detected lcore 83 as core 30 on socket 0
00:04:00.364 EAL: Detected lcore 84 as core 0 on socket 1
00:04:00.364 EAL: Detected lcore 85 as core 1 on socket 1
00:04:00.364 EAL: Detected lcore 86 as core 2 on socket 1
00:04:00.364 EAL: Detected lcore 87 as core 3 on socket 1
00:04:00.364 EAL: Detected lcore 88 as core 4 on socket 1
00:04:00.364 EAL: Detected lcore 89 as core 5 on socket 1
00:04:00.364 EAL: Detected lcore 90 as core 6 on socket 1
00:04:00.364 EAL: Detected lcore 91 as core 8 on socket 1
00:04:00.364 EAL: Detected lcore 92 as core 9 on socket 1
00:04:00.364 EAL: Detected lcore 93 as core 10 on socket 1
00:04:00.364 EAL: Detected lcore 94 as core 11 on socket 1
00:04:00.364 EAL: Detected lcore 95 as core 12 on socket 1
00:04:00.364 EAL: Detected lcore 96 as core 13 on socket 1
00:04:00.364 EAL: Detected lcore 97 as core 14 on socket 1
00:04:00.364 EAL: Detected lcore 98 as core 16 on socket 1
00:04:00.364 EAL: Detected lcore 99 as core 17 on socket 1
00:04:00.364 EAL: Detected lcore 100 as core 18 on socket 1
00:04:00.364 EAL: Detected lcore 101 as core 19 on socket 1
00:04:00.364 EAL: Detected lcore 102 as core 20 on socket 1
00:04:00.364 EAL: Detected lcore 103 as core 21 on socket 1
00:04:00.364 EAL: Detected lcore 104 as core 22 on socket 1
00:04:00.364 EAL: Detected lcore 105 as core 24 on socket 1
00:04:00.364 EAL: Detected lcore 106 as core 25 on socket 1
00:04:00.364 EAL: Detected lcore 107 as core 26 on socket 1
00:04:00.364 EAL: Detected lcore 108 as core 27 on socket 1
00:04:00.364 EAL: Detected lcore 109 as core 28 on socket 1
00:04:00.364 EAL: Detected lcore 110 as core 29 on socket 1
00:04:00.364 EAL: Detected lcore 111 as core 30 on socket 1
00:04:00.364 EAL: Maximum logical cores by configuration: 128
00:04:00.364 EAL: Detected CPU lcores: 112
00:04:00.364 EAL: Detected NUMA nodes: 2
00:04:00.364 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:00.364 EAL: Detected shared linkage of DPDK
00:04:00.364 EAL: No shared files mode enabled, IPC will be disabled
00:04:00.364 EAL: Bus pci wants IOVA as 'DC'
00:04:00.364 EAL: Buses did not request a specific IOVA mode.
00:04:00.364 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:00.364 EAL: Selected IOVA mode 'VA'
00:04:00.364 EAL: Probing VFIO support...
00:04:00.364 EAL: IOMMU type 1 (Type 1) is supported
00:04:00.364 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:00.364 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:00.364 EAL: VFIO support initialized
00:04:00.364 EAL: Ask a virtual area of 0x2e000 bytes
00:04:00.364 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:00.364 EAL: Setting up physically contiguous memory...
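The long "Detected lcore" table above is EAL walking the kernel's CPU topology: 112 logical cores across 2 sockets, with the hyperthread siblings (lcores 56 through 111) mapping back onto the same core IDs as lcores 0 through 55. Roughly the same mapping can be read straight from sysfs, independent of DPDK (standard kernel paths):

    # Print "lcore N as core C on socket S" for each online CPU,
    # from the same topology files EAL consults.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        n=${cpu##*cpu}
        core=$(<"$cpu"/topology/core_id)
        sock=$(<"$cpu"/topology/physical_package_id)
        echo "lcore $n as core $core on socket $sock"
    done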
00:04:00.364 EAL: Setting maximum number of open files to 524288
00:04:00.364 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:00.364 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:00.364 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:00.364 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.364 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:00.364 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:00.364 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.364 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:00.364 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:00.364 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.364 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:00.364 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:00.364 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.364 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:00.364 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:00.364 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.364 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:00.364 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:00.364 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.364 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:00.364 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:00.364 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.364 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:00.364 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:00.364 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.364 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:00.364 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:00.364 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:00.364 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.364 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:00.364 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:00.364 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.365 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:00.365 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:00.365 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.365 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:00.365 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:00.365 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.365 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:00.365 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:00.365 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.365 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:00.365 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:00.365 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.365 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:00.365 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:00.365 EAL: Ask a virtual area of 0x61000 bytes
00:04:00.365 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:00.365 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:00.365 EAL: Ask a virtual area of 0x400000000 bytes
00:04:00.365 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:00.365 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:00.365 EAL: Hugepages will be freed exactly as allocated.
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: TSC frequency is ~2500000 KHz
00:04:00.365 EAL: Main lcore 0 is ready (tid=7f5a50d73a00;cpuset=[0])
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 0
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 2MB
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:00.365 EAL: Mem event callback 'spdk:(nil)' registered
00:04:00.365
00:04:00.365
00:04:00.365 CUnit - A unit testing framework for C - Version 2.1-3
00:04:00.365 http://cunit.sourceforge.net/
00:04:00.365
00:04:00.365
00:04:00.365 Suite: components_suite
00:04:00.365 Test: vtophys_malloc_test ...passed
00:04:00.365 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 4MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 4MB
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 6MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 6MB
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 10MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 10MB
00:04:00.365 EAL: Trying to obtain current memory policy.
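The reservation pattern above is mechanical: each memseg list asks for a 0x61000-byte header plus 0x400000000 bytes of address space, which is exactly n_segs:8192 segments times hugepage_sz:2097152 (2 MiB); with 4 lists per socket and 2 sockets, 128 GiB of virtual address space is reserved up front, before any hugepage is actually allocated. Checking that arithmetic in the shell:

    printf '0x%x\n' $(( 8192 * 2097152 ))                 # 0x400000000 per memseg list
    echo "$(( 4 * 2 * 8192 * 2097152 / 1024**3 )) GiB"    # 128 GiB reserved in total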
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 18MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 18MB
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 34MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 34MB
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 66MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 66MB
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.365 EAL: Restoring previous memory policy: 4
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was expanded by 130MB
00:04:00.365 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.365 EAL: request: mp_malloc_sync
00:04:00.365 EAL: No shared files mode enabled, IPC is disabled
00:04:00.365 EAL: Heap on socket 0 was shrunk by 130MB
00:04:00.365 EAL: Trying to obtain current memory policy.
00:04:00.365 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.624 EAL: Restoring previous memory policy: 4
00:04:00.624 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.624 EAL: request: mp_malloc_sync
00:04:00.624 EAL: No shared files mode enabled, IPC is disabled
00:04:00.624 EAL: Heap on socket 0 was expanded by 258MB
00:04:00.624 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.624 EAL: request: mp_malloc_sync
00:04:00.624 EAL: No shared files mode enabled, IPC is disabled
00:04:00.624 EAL: Heap on socket 0 was shrunk by 258MB
00:04:00.624 EAL: Trying to obtain current memory policy.
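The expand/shrink sizes in vtophys_spdk_malloc_test follow a simple rule: round k allocates 2^k MB on top of the initial 2 MB heap, which produces the 4, 6, 10, 18, 34, 66, 130 and 258 MB steps above and the 514 and 1026 MB steps that follow. Reproducing the sequence:

    for k in $(seq 1 10); do printf '%dMB ' $(( 2**k + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB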
00:04:00.624 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.624 EAL: Restoring previous memory policy: 4
00:04:00.624 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.624 EAL: request: mp_malloc_sync
00:04:00.624 EAL: No shared files mode enabled, IPC is disabled
00:04:00.624 EAL: Heap on socket 0 was expanded by 514MB
00:04:00.883 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.883 EAL: request: mp_malloc_sync
00:04:00.883 EAL: No shared files mode enabled, IPC is disabled
00:04:00.883 EAL: Heap on socket 0 was shrunk by 514MB
00:04:00.883 EAL: Trying to obtain current memory policy.
00:04:00.883 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.142 EAL: Restoring previous memory policy: 4
00:04:01.143 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.143 EAL: request: mp_malloc_sync
00:04:01.143 EAL: No shared files mode enabled, IPC is disabled
00:04:01.143 EAL: Heap on socket 0 was expanded by 1026MB
00:04:01.143 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.402 EAL: request: mp_malloc_sync
00:04:01.402 EAL: No shared files mode enabled, IPC is disabled
00:04:01.402 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:01.402 passed
00:04:01.402
00:04:01.402 Run Summary: Type Total Ran Passed Failed Inactive
00:04:01.402 suites 1 1 n/a 0 0
00:04:01.402 tests 2 2 2 0 0
00:04:01.402 asserts 497 497 497 0 n/a
00:04:01.402
00:04:01.402 Elapsed time = 0.975 seconds
00:04:01.402 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.402 EAL: request: mp_malloc_sync
00:04:01.402 EAL: No shared files mode enabled, IPC is disabled
00:04:01.402 EAL: Heap on socket 0 was shrunk by 2MB
00:04:01.402 EAL: No shared files mode enabled, IPC is disabled
00:04:01.402 EAL: No shared files mode enabled, IPC is disabled
00:04:01.402 EAL: No shared files mode enabled, IPC is disabled
00:04:01.402
00:04:01.402 real 0m1.132s
00:04:01.402 user 0m0.660s
00:04:01.402 sys 0m0.439s
00:04:01.402 17:52:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:01.402 17:52:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:01.402 ************************************
00:04:01.402 END TEST env_vtophys
00:04:01.402 ************************************
00:04:01.402 17:52:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:01.402 17:52:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:01.402 17:52:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:01.402 17:52:09 env -- common/autotest_common.sh@10 -- # set +x
00:04:01.402 ************************************
00:04:01.402 START TEST env_pci
00:04:01.402 ************************************
00:04:01.402 17:52:09 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:01.402
00:04:01.402
00:04:01.402 CUnit - A unit testing framework for C - Version 2.1-3
00:04:01.402 http://cunit.sourceforge.net/
00:04:01.402
00:04:01.402
00:04:01.402 Suite: pci
00:04:01.402 Test: pci_hook ...[2024-12-09 17:52:09.316376] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2158727 has claimed it
00:04:01.402 EAL: Cannot find device (10000:00:01.0)
00:04:01.402 EAL: Failed to attach device on primary process
00:04:01.402 passed
00:04:01.402
00:04:01.402 Run Summary: Type Total Ran Passed Failed Inactive
00:04:01.402 suites 1 1 n/a 0 0
00:04:01.402 tests 1 1 1 0 0
00:04:01.402 asserts 25 25 25 0 n/a
00:04:01.402
00:04:01.402 Elapsed time = 0.034 seconds
00:04:01.402
00:04:01.402 real 0m0.056s
00:04:01.402 user 0m0.011s
00:04:01.402 sys 0m0.045s
00:04:01.402 17:52:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:01.402 17:52:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:01.402 ************************************
00:04:01.402 END TEST env_pci
00:04:01.402 ************************************
00:04:01.662 17:52:09 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:01.662 17:52:09 env -- env/env.sh@15 -- # uname
00:04:01.662 17:52:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:01.662 17:52:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:01.662 17:52:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:01.662 17:52:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:01.662 17:52:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:01.662 17:52:09 env -- common/autotest_common.sh@10 -- # set +x
00:04:01.662 ************************************
00:04:01.662 START TEST env_dpdk_post_init
00:04:01.662 ************************************
00:04:01.662 17:52:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:01.662 EAL: Detected CPU lcores: 112
00:04:01.662 EAL: Detected NUMA nodes: 2
00:04:01.662 EAL: Detected shared linkage of DPDK
00:04:01.662 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:01.662 EAL: Selected IOVA mode 'VA'
00:04:01.662 EAL: VFIO support initialized
00:04:01.662 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:01.662 EAL: Using IOMMU type 1 (Type 1)
00:04:01.662 EAL: Ignore mapping IO port bar(1)
00:04:01.662 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:01.662 EAL: Ignore mapping IO port bar(1)
00:04:01.662 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:01.662 EAL: Ignore mapping IO port bar(1)
00:04:01.662 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:01.922 EAL: Ignore mapping IO port bar(1)
00:04:01.922 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:02.860 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:04:07.052 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:04:07.052 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000
00:04:07.052 Starting DPDK initialization...
00:04:07.052 Starting SPDK post initialization...
00:04:07.052 SPDK NVMe probe
00:04:07.052 Attaching to 0000:d8:00.0
00:04:07.052 Attached to 0000:d8:00.0
00:04:07.052 Cleaning up...
00:04:07.052
00:04:07.052 real 0m5.387s
00:04:07.052 user 0m3.784s
00:04:07.052 sys 0m0.656s
00:04:07.052 17:52:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:07.052 17:52:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:07.052 ************************************
00:04:07.052 END TEST env_dpdk_post_init
00:04:07.052 ************************************
00:04:07.052 17:52:14 env -- env/env.sh@26 -- # uname
00:04:07.052 17:52:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:07.052 17:52:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:07.052 17:52:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.052 17:52:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.052 17:52:14 env -- common/autotest_common.sh@10 -- # set +x
00:04:07.052 ************************************
00:04:07.052 START TEST env_mem_callbacks
00:04:07.052 ************************************
00:04:07.052 17:52:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:07.052 EAL: Detected CPU lcores: 112
00:04:07.052 EAL: Detected NUMA nodes: 2
00:04:07.052 EAL: Detected shared linkage of DPDK
00:04:07.052 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:07.052 EAL: Selected IOVA mode 'VA'
00:04:07.052 EAL: VFIO support initialized
00:04:07.052 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:07.052
00:04:07.052
00:04:07.052 CUnit - A unit testing framework for C - Version 2.1-3
00:04:07.052 http://cunit.sourceforge.net/
00:04:07.052
00:04:07.052
00:04:07.052 Suite: memory
00:04:07.052 Test: test ...
00:04:07.052 register 0x200000200000 2097152 00:04:07.052 malloc 3145728 00:04:07.052 register 0x200000400000 4194304 00:04:07.052 buf 0x200000500000 len 3145728 PASSED 00:04:07.052 malloc 64 00:04:07.052 buf 0x2000004fff40 len 64 PASSED 00:04:07.052 malloc 4194304 00:04:07.052 register 0x200000800000 6291456 00:04:07.052 buf 0x200000a00000 len 4194304 PASSED 00:04:07.052 free 0x200000500000 3145728 00:04:07.052 free 0x2000004fff40 64 00:04:07.052 unregister 0x200000400000 4194304 PASSED 00:04:07.052 free 0x200000a00000 4194304 00:04:07.052 unregister 0x200000800000 6291456 PASSED 00:04:07.052 malloc 8388608 00:04:07.052 register 0x200000400000 10485760 00:04:07.052 buf 0x200000600000 len 8388608 PASSED 00:04:07.052 free 0x200000600000 8388608 00:04:07.052 unregister 0x200000400000 10485760 PASSED 00:04:07.052 passed 00:04:07.052 00:04:07.052 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.052 suites 1 1 n/a 0 0 00:04:07.052 tests 1 1 1 0 0 00:04:07.052 asserts 15 15 15 0 n/a 00:04:07.052 00:04:07.052 Elapsed time = 0.009 seconds 00:04:07.052 00:04:07.052 real 0m0.071s 00:04:07.052 user 0m0.025s 00:04:07.052 sys 0m0.046s 00:04:07.052 17:52:14 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.052 17:52:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:07.052 ************************************ 00:04:07.052 END TEST env_mem_callbacks 00:04:07.052 ************************************ 00:04:07.311 00:04:07.311 real 0m7.423s 00:04:07.311 user 0m4.879s 00:04:07.311 sys 0m1.612s 00:04:07.311 17:52:15 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.311 17:52:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.311 ************************************ 00:04:07.311 END TEST env 00:04:07.311 ************************************ 00:04:07.311 17:52:15 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:07.311 17:52:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.311 17:52:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.311 17:52:15 -- common/autotest_common.sh@10 -- # set +x 00:04:07.311 ************************************ 00:04:07.311 START TEST rpc 00:04:07.311 ************************************ 00:04:07.311 17:52:15 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:07.311 * Looking for test storage... 
00:04:07.311 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:07.311 17:52:15 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.311 17:52:15 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.311 17:52:15 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.570 17:52:15 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.570 17:52:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.570 17:52:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.570 17:52:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.570 17:52:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.570 17:52:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.570 17:52:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.570 17:52:15 rpc -- scripts/common.sh@345 -- # : 1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.570 17:52:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.570 17:52:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.570 17:52:15 rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.570 17:52:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.570 17:52:15 rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.570 17:52:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.570 17:52:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.570 17:52:15 rpc -- scripts/common.sh@368 -- # return 0 00:04:07.570 17:52:15 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.570 17:52:15 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.570 --rc genhtml_branch_coverage=1 00:04:07.570 --rc genhtml_function_coverage=1 00:04:07.570 --rc genhtml_legend=1 00:04:07.570 --rc geninfo_all_blocks=1 00:04:07.570 --rc geninfo_unexecuted_blocks=1 00:04:07.570 00:04:07.570 ' 00:04:07.570 17:52:15 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.570 --rc genhtml_branch_coverage=1 00:04:07.570 --rc genhtml_function_coverage=1 00:04:07.571 --rc genhtml_legend=1 00:04:07.571 --rc geninfo_all_blocks=1 00:04:07.571 --rc geninfo_unexecuted_blocks=1 00:04:07.571 00:04:07.571 ' 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.571 --rc genhtml_branch_coverage=1 00:04:07.571 --rc genhtml_function_coverage=1 00:04:07.571 
--rc genhtml_legend=1 00:04:07.571 --rc geninfo_all_blocks=1 00:04:07.571 --rc geninfo_unexecuted_blocks=1 00:04:07.571 00:04:07.571 ' 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.571 --rc genhtml_branch_coverage=1 00:04:07.571 --rc genhtml_function_coverage=1 00:04:07.571 --rc genhtml_legend=1 00:04:07.571 --rc geninfo_all_blocks=1 00:04:07.571 --rc geninfo_unexecuted_blocks=1 00:04:07.571 00:04:07.571 ' 00:04:07.571 17:52:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2159922 00:04:07.571 17:52:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:07.571 17:52:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.571 17:52:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2159922 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@835 -- # '[' -z 2159922 ']' 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.571 17:52:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.571 [2024-12-09 17:52:15.381546] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:07.571 [2024-12-09 17:52:15.381595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159922 ] 00:04:07.571 [2024-12-09 17:52:15.471881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.571 [2024-12-09 17:52:15.513686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:07.571 [2024-12-09 17:52:15.513720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2159922' to capture a snapshot of events at runtime. 00:04:07.571 [2024-12-09 17:52:15.513730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:07.571 [2024-12-09 17:52:15.513739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:07.571 [2024-12-09 17:52:15.513746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2159922 for offline analysis/debug. 
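
The spdk_tgt launched above was started with '-e bdev', so the bdev tracepoint group is enabled, and the rpc_integrity test that follows drives the target purely over JSON-RPC. The same calls can be replayed by hand with scripts/rpc.py against the default socket (/var/tmp/spdk.sock); a minimal sketch, using only commands that appear in this log, with paths taken from the workspace layout above:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    $RPC bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks (Malloc0 in this run)
    $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 behind a passthru vbdev
    $RPC bdev_get_bdevs | jq length                    # expect 2: Malloc0 + Passthru0
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc0

    # While the target runs, a trace snapshot can be captured exactly as the
    # startup notice above suggests:
    #   spdk_trace -s spdk_tgt -p 2159922
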
00:04:07.571 [2024-12-09 17:52:15.514374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.508 17:52:16 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.508 17:52:16 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:08.508 17:52:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:08.508 17:52:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:08.508 17:52:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.508 17:52:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.508 17:52:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.508 17:52:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.508 17:52:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.508 ************************************ 00:04:08.508 START TEST rpc_integrity 00:04:08.508 ************************************ 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.508 { 00:04:08.508 "name": "Malloc0", 00:04:08.508 "aliases": [ 00:04:08.508 "f062c4d6-37de-4450-af65-453172e1fe37" 00:04:08.508 ], 00:04:08.508 "product_name": "Malloc disk", 00:04:08.508 "block_size": 512, 00:04:08.508 "num_blocks": 16384, 00:04:08.508 "uuid": "f062c4d6-37de-4450-af65-453172e1fe37", 00:04:08.508 "assigned_rate_limits": { 00:04:08.508 "rw_ios_per_sec": 0, 00:04:08.508 "rw_mbytes_per_sec": 0, 00:04:08.508 "r_mbytes_per_sec": 0, 00:04:08.508 "w_mbytes_per_sec": 0 00:04:08.508 }, 00:04:08.508 "claimed": false, 
00:04:08.508 "zoned": false, 00:04:08.508 "supported_io_types": { 00:04:08.508 "read": true, 00:04:08.508 "write": true, 00:04:08.508 "unmap": true, 00:04:08.508 "flush": true, 00:04:08.508 "reset": true, 00:04:08.508 "nvme_admin": false, 00:04:08.508 "nvme_io": false, 00:04:08.508 "nvme_io_md": false, 00:04:08.508 "write_zeroes": true, 00:04:08.508 "zcopy": true, 00:04:08.508 "get_zone_info": false, 00:04:08.508 "zone_management": false, 00:04:08.508 "zone_append": false, 00:04:08.508 "compare": false, 00:04:08.508 "compare_and_write": false, 00:04:08.508 "abort": true, 00:04:08.508 "seek_hole": false, 00:04:08.508 "seek_data": false, 00:04:08.508 "copy": true, 00:04:08.508 "nvme_iov_md": false 00:04:08.508 }, 00:04:08.508 "memory_domains": [ 00:04:08.508 { 00:04:08.508 "dma_device_id": "system", 00:04:08.508 "dma_device_type": 1 00:04:08.508 }, 00:04:08.508 { 00:04:08.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.508 "dma_device_type": 2 00:04:08.508 } 00:04:08.508 ], 00:04:08.508 "driver_specific": {} 00:04:08.508 } 00:04:08.508 ]' 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.508 [2024-12-09 17:52:16.362763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.508 [2024-12-09 17:52:16.362793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.508 [2024-12-09 17:52:16.362808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b4c60 00:04:08.508 [2024-12-09 17:52:16.362816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.508 [2024-12-09 17:52:16.363924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.508 [2024-12-09 17:52:16.363955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.508 Passthru0 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.508 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.508 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.508 { 00:04:08.508 "name": "Malloc0", 00:04:08.508 "aliases": [ 00:04:08.508 "f062c4d6-37de-4450-af65-453172e1fe37" 00:04:08.508 ], 00:04:08.508 "product_name": "Malloc disk", 00:04:08.508 "block_size": 512, 00:04:08.508 "num_blocks": 16384, 00:04:08.508 "uuid": "f062c4d6-37de-4450-af65-453172e1fe37", 00:04:08.508 "assigned_rate_limits": { 00:04:08.508 "rw_ios_per_sec": 0, 00:04:08.508 "rw_mbytes_per_sec": 0, 00:04:08.508 "r_mbytes_per_sec": 0, 00:04:08.508 "w_mbytes_per_sec": 0 00:04:08.508 }, 00:04:08.508 "claimed": true, 00:04:08.508 "claim_type": "exclusive_write", 00:04:08.508 "zoned": false, 00:04:08.508 "supported_io_types": { 00:04:08.508 "read": true, 00:04:08.508 "write": true, 00:04:08.508 "unmap": true, 00:04:08.508 "flush": true, 00:04:08.508 "reset": true, 
00:04:08.508 "nvme_admin": false, 00:04:08.508 "nvme_io": false, 00:04:08.508 "nvme_io_md": false, 00:04:08.508 "write_zeroes": true, 00:04:08.508 "zcopy": true, 00:04:08.508 "get_zone_info": false, 00:04:08.508 "zone_management": false, 00:04:08.508 "zone_append": false, 00:04:08.508 "compare": false, 00:04:08.508 "compare_and_write": false, 00:04:08.508 "abort": true, 00:04:08.508 "seek_hole": false, 00:04:08.508 "seek_data": false, 00:04:08.508 "copy": true, 00:04:08.508 "nvme_iov_md": false 00:04:08.508 }, 00:04:08.508 "memory_domains": [ 00:04:08.508 { 00:04:08.508 "dma_device_id": "system", 00:04:08.508 "dma_device_type": 1 00:04:08.508 }, 00:04:08.508 { 00:04:08.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.508 "dma_device_type": 2 00:04:08.508 } 00:04:08.508 ], 00:04:08.508 "driver_specific": {} 00:04:08.508 }, 00:04:08.508 { 00:04:08.508 "name": "Passthru0", 00:04:08.508 "aliases": [ 00:04:08.508 "22995681-aa9b-542d-a765-434ef3a21dd0" 00:04:08.508 ], 00:04:08.508 "product_name": "passthru", 00:04:08.508 "block_size": 512, 00:04:08.508 "num_blocks": 16384, 00:04:08.508 "uuid": "22995681-aa9b-542d-a765-434ef3a21dd0", 00:04:08.508 "assigned_rate_limits": { 00:04:08.508 "rw_ios_per_sec": 0, 00:04:08.508 "rw_mbytes_per_sec": 0, 00:04:08.508 "r_mbytes_per_sec": 0, 00:04:08.508 "w_mbytes_per_sec": 0 00:04:08.508 }, 00:04:08.508 "claimed": false, 00:04:08.508 "zoned": false, 00:04:08.508 "supported_io_types": { 00:04:08.508 "read": true, 00:04:08.508 "write": true, 00:04:08.508 "unmap": true, 00:04:08.508 "flush": true, 00:04:08.508 "reset": true, 00:04:08.508 "nvme_admin": false, 00:04:08.508 "nvme_io": false, 00:04:08.508 "nvme_io_md": false, 00:04:08.508 "write_zeroes": true, 00:04:08.508 "zcopy": true, 00:04:08.508 "get_zone_info": false, 00:04:08.508 "zone_management": false, 00:04:08.508 "zone_append": false, 00:04:08.508 "compare": false, 00:04:08.508 "compare_and_write": false, 00:04:08.508 "abort": true, 00:04:08.508 "seek_hole": false, 00:04:08.508 "seek_data": false, 00:04:08.508 "copy": true, 00:04:08.508 "nvme_iov_md": false 00:04:08.508 }, 00:04:08.508 "memory_domains": [ 00:04:08.508 { 00:04:08.508 "dma_device_id": "system", 00:04:08.508 "dma_device_type": 1 00:04:08.508 }, 00:04:08.508 { 00:04:08.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.509 "dma_device_type": 2 00:04:08.509 } 00:04:08.509 ], 00:04:08.509 "driver_specific": { 00:04:08.509 "passthru": { 00:04:08.509 "name": "Passthru0", 00:04:08.509 "base_bdev_name": "Malloc0" 00:04:08.509 } 00:04:08.509 } 00:04:08.509 } 00:04:08.509 ]' 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.509 
17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.509 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.509 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.768 17:52:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.768 00:04:08.768 real 0m0.298s 00:04:08.768 user 0m0.176s 00:04:08.768 sys 0m0.057s 00:04:08.768 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.768 17:52:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.768 ************************************ 00:04:08.768 END TEST rpc_integrity 00:04:08.768 ************************************ 00:04:08.768 17:52:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.768 17:52:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.768 17:52:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.768 17:52:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.768 ************************************ 00:04:08.768 START TEST rpc_plugins 00:04:08.768 ************************************ 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:08.768 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.768 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.768 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.768 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.768 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.768 { 00:04:08.768 "name": "Malloc1", 00:04:08.768 "aliases": [ 00:04:08.768 "c975a26d-a4df-4659-aa7d-0baa80a7c0c0" 00:04:08.768 ], 00:04:08.768 "product_name": "Malloc disk", 00:04:08.768 "block_size": 4096, 00:04:08.768 "num_blocks": 256, 00:04:08.768 "uuid": "c975a26d-a4df-4659-aa7d-0baa80a7c0c0", 00:04:08.768 "assigned_rate_limits": { 00:04:08.768 "rw_ios_per_sec": 0, 00:04:08.768 "rw_mbytes_per_sec": 0, 00:04:08.768 "r_mbytes_per_sec": 0, 00:04:08.768 "w_mbytes_per_sec": 0 00:04:08.768 }, 00:04:08.768 "claimed": false, 00:04:08.768 "zoned": false, 00:04:08.768 "supported_io_types": { 00:04:08.768 "read": true, 00:04:08.768 "write": true, 00:04:08.768 "unmap": true, 00:04:08.768 "flush": true, 00:04:08.768 "reset": true, 00:04:08.768 "nvme_admin": false, 00:04:08.768 "nvme_io": false, 00:04:08.768 "nvme_io_md": false, 00:04:08.768 "write_zeroes": true, 00:04:08.768 "zcopy": true, 00:04:08.768 "get_zone_info": false, 00:04:08.768 "zone_management": false, 00:04:08.768 "zone_append": false, 00:04:08.768 "compare": false, 00:04:08.768 "compare_and_write": false, 00:04:08.768 "abort": true, 00:04:08.768 "seek_hole": false, 00:04:08.768 "seek_data": false, 00:04:08.768 "copy": true, 00:04:08.768 "nvme_iov_md": false 00:04:08.768 }, 00:04:08.768 
"memory_domains": [ 00:04:08.768 { 00:04:08.768 "dma_device_id": "system", 00:04:08.769 "dma_device_type": 1 00:04:08.769 }, 00:04:08.769 { 00:04:08.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.769 "dma_device_type": 2 00:04:08.769 } 00:04:08.769 ], 00:04:08.769 "driver_specific": {} 00:04:08.769 } 00:04:08.769 ]' 00:04:08.769 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.769 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.769 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.769 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.769 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.769 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.769 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.769 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.769 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.769 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.769 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.769 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.028 17:52:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.028 00:04:09.028 real 0m0.149s 00:04:09.028 user 0m0.086s 00:04:09.028 sys 0m0.027s 00:04:09.028 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.028 17:52:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.028 ************************************ 00:04:09.028 END TEST rpc_plugins 00:04:09.028 ************************************ 00:04:09.028 17:52:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.028 17:52:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.028 17:52:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.028 17:52:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.028 ************************************ 00:04:09.028 START TEST rpc_trace_cmd_test 00:04:09.028 ************************************ 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.028 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2159922", 00:04:09.028 "tpoint_group_mask": "0x8", 00:04:09.028 "iscsi_conn": { 00:04:09.028 "mask": "0x2", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "scsi": { 00:04:09.028 "mask": "0x4", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "bdev": { 00:04:09.028 "mask": "0x8", 00:04:09.028 "tpoint_mask": "0xffffffffffffffff" 00:04:09.028 }, 00:04:09.028 "nvmf_rdma": { 00:04:09.028 "mask": "0x10", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "nvmf_tcp": { 00:04:09.028 "mask": "0x20", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 
00:04:09.028 "ftl": { 00:04:09.028 "mask": "0x40", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "blobfs": { 00:04:09.028 "mask": "0x80", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "dsa": { 00:04:09.028 "mask": "0x200", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "thread": { 00:04:09.028 "mask": "0x400", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "nvme_pcie": { 00:04:09.028 "mask": "0x800", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "iaa": { 00:04:09.028 "mask": "0x1000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "nvme_tcp": { 00:04:09.028 "mask": "0x2000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "bdev_nvme": { 00:04:09.028 "mask": "0x4000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "sock": { 00:04:09.028 "mask": "0x8000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "blob": { 00:04:09.028 "mask": "0x10000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "bdev_raid": { 00:04:09.028 "mask": "0x20000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 }, 00:04:09.028 "scheduler": { 00:04:09.028 "mask": "0x40000", 00:04:09.028 "tpoint_mask": "0x0" 00:04:09.028 } 00:04:09.028 }' 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.028 17:52:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.288 17:52:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.288 17:52:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.288 17:52:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.288 00:04:09.288 real 0m0.214s 00:04:09.288 user 0m0.169s 00:04:09.288 sys 0m0.036s 00:04:09.288 17:52:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.288 17:52:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.288 ************************************ 00:04:09.288 END TEST rpc_trace_cmd_test 00:04:09.288 ************************************ 00:04:09.288 17:52:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.288 17:52:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.288 17:52:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.288 17:52:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.288 17:52:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.288 17:52:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.288 ************************************ 00:04:09.288 START TEST rpc_daemon_integrity 00:04:09.288 ************************************ 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.288 { 00:04:09.288 "name": "Malloc2", 00:04:09.288 "aliases": [ 00:04:09.288 "3f1abe3f-ee46-4732-ae5f-f94871c2fc1c" 00:04:09.288 ], 00:04:09.288 "product_name": "Malloc disk", 00:04:09.288 "block_size": 512, 00:04:09.288 "num_blocks": 16384, 00:04:09.288 "uuid": "3f1abe3f-ee46-4732-ae5f-f94871c2fc1c", 00:04:09.288 "assigned_rate_limits": { 00:04:09.288 "rw_ios_per_sec": 0, 00:04:09.288 "rw_mbytes_per_sec": 0, 00:04:09.288 "r_mbytes_per_sec": 0, 00:04:09.288 "w_mbytes_per_sec": 0 00:04:09.288 }, 00:04:09.288 "claimed": false, 00:04:09.288 "zoned": false, 00:04:09.288 "supported_io_types": { 00:04:09.288 "read": true, 00:04:09.288 "write": true, 00:04:09.288 "unmap": true, 00:04:09.288 "flush": true, 00:04:09.288 "reset": true, 00:04:09.288 "nvme_admin": false, 00:04:09.288 "nvme_io": false, 00:04:09.288 "nvme_io_md": false, 00:04:09.288 "write_zeroes": true, 00:04:09.288 "zcopy": true, 00:04:09.288 "get_zone_info": false, 00:04:09.288 "zone_management": false, 00:04:09.288 "zone_append": false, 00:04:09.288 "compare": false, 00:04:09.288 "compare_and_write": false, 00:04:09.288 "abort": true, 00:04:09.288 "seek_hole": false, 00:04:09.288 "seek_data": false, 00:04:09.288 "copy": true, 00:04:09.288 "nvme_iov_md": false 00:04:09.288 }, 00:04:09.288 "memory_domains": [ 00:04:09.288 { 00:04:09.288 "dma_device_id": "system", 00:04:09.288 "dma_device_type": 1 00:04:09.288 }, 00:04:09.288 { 00:04:09.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.288 "dma_device_type": 2 00:04:09.288 } 00:04:09.288 ], 00:04:09.288 "driver_specific": {} 00:04:09.288 } 00:04:09.288 ]' 00:04:09.288 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.547 [2024-12-09 17:52:17.273210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.547 [2024-12-09 17:52:17.273239] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.547 [2024-12-09 17:52:17.273255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b6b60 00:04:09.547 [2024-12-09 17:52:17.273263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.547 [2024-12-09 17:52:17.274236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.547 [2024-12-09 17:52:17.274258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.547 Passthru0 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.547 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.548 { 00:04:09.548 "name": "Malloc2", 00:04:09.548 "aliases": [ 00:04:09.548 "3f1abe3f-ee46-4732-ae5f-f94871c2fc1c" 00:04:09.548 ], 00:04:09.548 "product_name": "Malloc disk", 00:04:09.548 "block_size": 512, 00:04:09.548 "num_blocks": 16384, 00:04:09.548 "uuid": "3f1abe3f-ee46-4732-ae5f-f94871c2fc1c", 00:04:09.548 "assigned_rate_limits": { 00:04:09.548 "rw_ios_per_sec": 0, 00:04:09.548 "rw_mbytes_per_sec": 0, 00:04:09.548 "r_mbytes_per_sec": 0, 00:04:09.548 "w_mbytes_per_sec": 0 00:04:09.548 }, 00:04:09.548 "claimed": true, 00:04:09.548 "claim_type": "exclusive_write", 00:04:09.548 "zoned": false, 00:04:09.548 "supported_io_types": { 00:04:09.548 "read": true, 00:04:09.548 "write": true, 00:04:09.548 "unmap": true, 00:04:09.548 "flush": true, 00:04:09.548 "reset": true, 00:04:09.548 "nvme_admin": false, 00:04:09.548 "nvme_io": false, 00:04:09.548 "nvme_io_md": false, 00:04:09.548 "write_zeroes": true, 00:04:09.548 "zcopy": true, 00:04:09.548 "get_zone_info": false, 00:04:09.548 "zone_management": false, 00:04:09.548 "zone_append": false, 00:04:09.548 "compare": false, 00:04:09.548 "compare_and_write": false, 00:04:09.548 "abort": true, 00:04:09.548 "seek_hole": false, 00:04:09.548 "seek_data": false, 00:04:09.548 "copy": true, 00:04:09.548 "nvme_iov_md": false 00:04:09.548 }, 00:04:09.548 "memory_domains": [ 00:04:09.548 { 00:04:09.548 "dma_device_id": "system", 00:04:09.548 "dma_device_type": 1 00:04:09.548 }, 00:04:09.548 { 00:04:09.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.548 "dma_device_type": 2 00:04:09.548 } 00:04:09.548 ], 00:04:09.548 "driver_specific": {} 00:04:09.548 }, 00:04:09.548 { 00:04:09.548 "name": "Passthru0", 00:04:09.548 "aliases": [ 00:04:09.548 "9921425b-ab98-5536-91f9-6c79d1a4a5e1" 00:04:09.548 ], 00:04:09.548 "product_name": "passthru", 00:04:09.548 "block_size": 512, 00:04:09.548 "num_blocks": 16384, 00:04:09.548 "uuid": "9921425b-ab98-5536-91f9-6c79d1a4a5e1", 00:04:09.548 "assigned_rate_limits": { 00:04:09.548 "rw_ios_per_sec": 0, 00:04:09.548 "rw_mbytes_per_sec": 0, 00:04:09.548 "r_mbytes_per_sec": 0, 00:04:09.548 "w_mbytes_per_sec": 0 00:04:09.548 }, 00:04:09.548 "claimed": false, 00:04:09.548 "zoned": false, 00:04:09.548 "supported_io_types": { 00:04:09.548 "read": true, 00:04:09.548 "write": true, 00:04:09.548 "unmap": true, 00:04:09.548 "flush": true, 00:04:09.548 "reset": true, 00:04:09.548 "nvme_admin": false, 
00:04:09.548 "nvme_io": false, 00:04:09.548 "nvme_io_md": false, 00:04:09.548 "write_zeroes": true, 00:04:09.548 "zcopy": true, 00:04:09.548 "get_zone_info": false, 00:04:09.548 "zone_management": false, 00:04:09.548 "zone_append": false, 00:04:09.548 "compare": false, 00:04:09.548 "compare_and_write": false, 00:04:09.548 "abort": true, 00:04:09.548 "seek_hole": false, 00:04:09.548 "seek_data": false, 00:04:09.548 "copy": true, 00:04:09.548 "nvme_iov_md": false 00:04:09.548 }, 00:04:09.548 "memory_domains": [ 00:04:09.548 { 00:04:09.548 "dma_device_id": "system", 00:04:09.548 "dma_device_type": 1 00:04:09.548 }, 00:04:09.548 { 00:04:09.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.548 "dma_device_type": 2 00:04:09.548 } 00:04:09.548 ], 00:04:09.548 "driver_specific": { 00:04:09.548 "passthru": { 00:04:09.548 "name": "Passthru0", 00:04:09.548 "base_bdev_name": "Malloc2" 00:04:09.548 } 00:04:09.548 } 00:04:09.548 } 00:04:09.548 ]' 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.548 00:04:09.548 real 0m0.292s 00:04:09.548 user 0m0.172s 00:04:09.548 sys 0m0.058s 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.548 17:52:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.548 ************************************ 00:04:09.548 END TEST rpc_daemon_integrity 00:04:09.548 ************************************ 00:04:09.548 17:52:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.548 17:52:17 rpc -- rpc/rpc.sh@84 -- # killprocess 2159922 00:04:09.548 17:52:17 rpc -- common/autotest_common.sh@954 -- # '[' -z 2159922 ']' 00:04:09.548 17:52:17 rpc -- common/autotest_common.sh@958 -- # kill -0 2159922 00:04:09.548 17:52:17 rpc -- common/autotest_common.sh@959 -- # uname 00:04:09.548 17:52:17 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:09.548 17:52:17 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159922 00:04:09.807 17:52:17 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:09.807 17:52:17 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:09.807 17:52:17 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159922' 00:04:09.807 killing process with pid 2159922 00:04:09.807 17:52:17 rpc -- common/autotest_common.sh@973 -- # kill 2159922 00:04:09.807 17:52:17 rpc -- common/autotest_common.sh@978 -- # wait 2159922 00:04:10.066 00:04:10.066 real 0m2.705s 00:04:10.066 user 0m3.351s 00:04:10.066 sys 0m0.899s 00:04:10.066 17:52:17 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.066 17:52:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.066 ************************************ 00:04:10.066 END TEST rpc 00:04:10.066 ************************************ 00:04:10.066 17:52:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.066 17:52:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.066 17:52:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.066 17:52:17 -- common/autotest_common.sh@10 -- # set +x 00:04:10.066 ************************************ 00:04:10.066 START TEST skip_rpc 00:04:10.066 ************************************ 00:04:10.067 17:52:17 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.067 * Looking for test storage... 00:04:10.067 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:10.067 17:52:18 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.067 17:52:18 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.067 17:52:18 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.326 17:52:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.326 --rc genhtml_branch_coverage=1 00:04:10.326 --rc genhtml_function_coverage=1 00:04:10.326 --rc genhtml_legend=1 00:04:10.326 --rc geninfo_all_blocks=1 00:04:10.326 --rc geninfo_unexecuted_blocks=1 00:04:10.326 00:04:10.326 ' 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.326 --rc genhtml_branch_coverage=1 00:04:10.326 --rc genhtml_function_coverage=1 00:04:10.326 --rc genhtml_legend=1 00:04:10.326 --rc geninfo_all_blocks=1 00:04:10.326 --rc geninfo_unexecuted_blocks=1 00:04:10.326 00:04:10.326 ' 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.326 --rc genhtml_branch_coverage=1 00:04:10.326 --rc genhtml_function_coverage=1 00:04:10.326 --rc genhtml_legend=1 00:04:10.326 --rc geninfo_all_blocks=1 00:04:10.326 --rc geninfo_unexecuted_blocks=1 00:04:10.326 00:04:10.326 ' 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.326 --rc genhtml_branch_coverage=1 00:04:10.326 --rc genhtml_function_coverage=1 00:04:10.326 --rc genhtml_legend=1 00:04:10.326 --rc geninfo_all_blocks=1 00:04:10.326 --rc geninfo_unexecuted_blocks=1 00:04:10.326 00:04:10.326 ' 00:04:10.326 17:52:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:10.326 17:52:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:10.326 17:52:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.326 17:52:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.327 17:52:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.327 17:52:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.327 ************************************ 00:04:10.327 START TEST skip_rpc 00:04:10.327 ************************************ 00:04:10.327 17:52:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:10.327 17:52:18 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2160643 00:04:10.327 17:52:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.327 17:52:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.327 17:52:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.327 [2024-12-09 17:52:18.225931] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:10.327 [2024-12-09 17:52:18.225981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160643 ] 00:04:10.586 [2024-12-09 17:52:18.317380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.586 [2024-12-09 17:52:18.356098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2160643 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2160643 ']' 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2160643 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160643 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160643' 00:04:15.861 killing process with pid 2160643 00:04:15.861 17:52:23 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2160643 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2160643 00:04:15.861 00:04:15.861 real 0m5.385s 00:04:15.861 user 0m5.116s 00:04:15.861 sys 0m0.319s 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.861 17:52:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.861 ************************************ 00:04:15.861 END TEST skip_rpc 00:04:15.861 ************************************ 00:04:15.861 17:52:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.861 17:52:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.861 17:52:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.861 17:52:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.861 ************************************ 00:04:15.861 START TEST skip_rpc_with_json 00:04:15.861 ************************************ 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2161476 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2161476 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2161476 ']' 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.861 17:52:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.861 [2024-12-09 17:52:23.697166] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
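
The skip_rpc_with_json case now starting exercises the save-config round trip: nvmf_get_transports --trtype tcp first fails with -19 ("No such device") because no TCP transport exists yet, nvmf_create_transport -t tcp then initializes one, and save_config serializes every subsystem to the CONFIG_PATH defined earlier in this log (test/rpc/config.json). A sketch of the same sequence with scripts/rpc.py:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    CONFIG=$SPDK/test/rpc/config.json

    $RPC nvmf_get_transports --trtype tcp || true   # expected to fail: transport 'tcp' does not exist
    $RPC nvmf_create_transport -t tcp               # triggers *** TCP Transport Init ***
    $RPC save_config > "$CONFIG"                    # JSON dump of all subsystems (shown below)
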
00:04:15.861 [2024-12-09 17:52:23.697215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161476 ] 00:04:15.862 [2024-12-09 17:52:23.790551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.862 [2024-12-09 17:52:23.831844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.800 [2024-12-09 17:52:24.536311] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.800 request: 00:04:16.800 { 00:04:16.800 "trtype": "tcp", 00:04:16.800 "method": "nvmf_get_transports", 00:04:16.800 "req_id": 1 00:04:16.800 } 00:04:16.800 Got JSON-RPC error response 00:04:16.800 response: 00:04:16.800 { 00:04:16.800 "code": -19, 00:04:16.800 "message": "No such device" 00:04:16.800 } 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.800 [2024-12-09 17:52:24.548416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.800 17:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:16.800 { 00:04:16.800 "subsystems": [ 00:04:16.800 { 00:04:16.800 "subsystem": "fsdev", 00:04:16.800 "config": [ 00:04:16.800 { 00:04:16.800 "method": "fsdev_set_opts", 00:04:16.800 "params": { 00:04:16.800 "fsdev_io_pool_size": 65535, 00:04:16.800 "fsdev_io_cache_size": 256 00:04:16.800 } 00:04:16.800 } 00:04:16.800 ] 00:04:16.800 }, 00:04:16.800 { 00:04:16.800 "subsystem": "keyring", 00:04:16.800 "config": [] 00:04:16.800 }, 00:04:16.800 { 00:04:16.800 "subsystem": "iobuf", 00:04:16.800 "config": [ 00:04:16.800 { 00:04:16.800 "method": "iobuf_set_options", 00:04:16.800 "params": { 00:04:16.800 "small_pool_count": 8192, 00:04:16.800 "large_pool_count": 1024, 00:04:16.800 "small_bufsize": 8192, 00:04:16.800 "large_bufsize": 135168, 00:04:16.800 "enable_numa": false 00:04:16.800 } 00:04:16.800 } 00:04:16.800 ] 00:04:16.800 }, 00:04:16.800 { 00:04:16.800 "subsystem": "sock", 00:04:16.800 "config": [ 00:04:16.800 { 
00:04:16.800 "method": "sock_set_default_impl", 00:04:16.800 "params": { 00:04:16.800 "impl_name": "posix" 00:04:16.800 } 00:04:16.800 }, 00:04:16.800 { 00:04:16.800 "method": "sock_impl_set_options", 00:04:16.800 "params": { 00:04:16.800 "impl_name": "ssl", 00:04:16.800 "recv_buf_size": 4096, 00:04:16.800 "send_buf_size": 4096, 00:04:16.800 "enable_recv_pipe": true, 00:04:16.800 "enable_quickack": false, 00:04:16.800 "enable_placement_id": 0, 00:04:16.800 "enable_zerocopy_send_server": true, 00:04:16.800 "enable_zerocopy_send_client": false, 00:04:16.800 "zerocopy_threshold": 0, 00:04:16.800 "tls_version": 0, 00:04:16.800 "enable_ktls": false 00:04:16.800 } 00:04:16.800 }, 00:04:16.800 { 00:04:16.800 "method": "sock_impl_set_options", 00:04:16.800 "params": { 00:04:16.800 "impl_name": "posix", 00:04:16.800 "recv_buf_size": 2097152, 00:04:16.800 "send_buf_size": 2097152, 00:04:16.800 "enable_recv_pipe": true, 00:04:16.800 "enable_quickack": false, 00:04:16.800 "enable_placement_id": 0, 00:04:16.800 "enable_zerocopy_send_server": true, 00:04:16.800 "enable_zerocopy_send_client": false, 00:04:16.800 "zerocopy_threshold": 0, 00:04:16.800 "tls_version": 0, 00:04:16.800 "enable_ktls": false 00:04:16.800 } 00:04:16.800 } 00:04:16.800 ] 00:04:16.800 }, 00:04:16.800 { 00:04:16.800 "subsystem": "vmd", 00:04:16.800 "config": [] 00:04:16.800 }, 00:04:16.800 { 00:04:16.801 "subsystem": "accel", 00:04:16.801 "config": [ 00:04:16.801 { 00:04:16.801 "method": "accel_set_options", 00:04:16.801 "params": { 00:04:16.801 "small_cache_size": 128, 00:04:16.801 "large_cache_size": 16, 00:04:16.801 "task_count": 2048, 00:04:16.801 "sequence_count": 2048, 00:04:16.801 "buf_count": 2048 00:04:16.801 } 00:04:16.801 } 00:04:16.801 ] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "bdev", 00:04:16.801 "config": [ 00:04:16.801 { 00:04:16.801 "method": "bdev_set_options", 00:04:16.801 "params": { 00:04:16.801 "bdev_io_pool_size": 65535, 00:04:16.801 "bdev_io_cache_size": 256, 00:04:16.801 "bdev_auto_examine": true, 00:04:16.801 "iobuf_small_cache_size": 128, 00:04:16.801 "iobuf_large_cache_size": 16 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "bdev_raid_set_options", 00:04:16.801 "params": { 00:04:16.801 "process_window_size_kb": 1024, 00:04:16.801 "process_max_bandwidth_mb_sec": 0 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "bdev_iscsi_set_options", 00:04:16.801 "params": { 00:04:16.801 "timeout_sec": 30 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "bdev_nvme_set_options", 00:04:16.801 "params": { 00:04:16.801 "action_on_timeout": "none", 00:04:16.801 "timeout_us": 0, 00:04:16.801 "timeout_admin_us": 0, 00:04:16.801 "keep_alive_timeout_ms": 10000, 00:04:16.801 "arbitration_burst": 0, 00:04:16.801 "low_priority_weight": 0, 00:04:16.801 "medium_priority_weight": 0, 00:04:16.801 "high_priority_weight": 0, 00:04:16.801 "nvme_adminq_poll_period_us": 10000, 00:04:16.801 "nvme_ioq_poll_period_us": 0, 00:04:16.801 "io_queue_requests": 0, 00:04:16.801 "delay_cmd_submit": true, 00:04:16.801 "transport_retry_count": 4, 00:04:16.801 "bdev_retry_count": 3, 00:04:16.801 "transport_ack_timeout": 0, 00:04:16.801 "ctrlr_loss_timeout_sec": 0, 00:04:16.801 "reconnect_delay_sec": 0, 00:04:16.801 "fast_io_fail_timeout_sec": 0, 00:04:16.801 "disable_auto_failback": false, 00:04:16.801 "generate_uuids": false, 00:04:16.801 "transport_tos": 0, 00:04:16.801 "nvme_error_stat": false, 00:04:16.801 "rdma_srq_size": 0, 00:04:16.801 "io_path_stat": false, 
00:04:16.801 "allow_accel_sequence": false, 00:04:16.801 "rdma_max_cq_size": 0, 00:04:16.801 "rdma_cm_event_timeout_ms": 0, 00:04:16.801 "dhchap_digests": [ 00:04:16.801 "sha256", 00:04:16.801 "sha384", 00:04:16.801 "sha512" 00:04:16.801 ], 00:04:16.801 "dhchap_dhgroups": [ 00:04:16.801 "null", 00:04:16.801 "ffdhe2048", 00:04:16.801 "ffdhe3072", 00:04:16.801 "ffdhe4096", 00:04:16.801 "ffdhe6144", 00:04:16.801 "ffdhe8192" 00:04:16.801 ] 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "bdev_nvme_set_hotplug", 00:04:16.801 "params": { 00:04:16.801 "period_us": 100000, 00:04:16.801 "enable": false 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "bdev_wait_for_examine" 00:04:16.801 } 00:04:16.801 ] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "scsi", 00:04:16.801 "config": null 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "scheduler", 00:04:16.801 "config": [ 00:04:16.801 { 00:04:16.801 "method": "framework_set_scheduler", 00:04:16.801 "params": { 00:04:16.801 "name": "static" 00:04:16.801 } 00:04:16.801 } 00:04:16.801 ] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "vhost_scsi", 00:04:16.801 "config": [] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "vhost_blk", 00:04:16.801 "config": [] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "ublk", 00:04:16.801 "config": [] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "nbd", 00:04:16.801 "config": [] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "nvmf", 00:04:16.801 "config": [ 00:04:16.801 { 00:04:16.801 "method": "nvmf_set_config", 00:04:16.801 "params": { 00:04:16.801 "discovery_filter": "match_any", 00:04:16.801 "admin_cmd_passthru": { 00:04:16.801 "identify_ctrlr": false 00:04:16.801 }, 00:04:16.801 "dhchap_digests": [ 00:04:16.801 "sha256", 00:04:16.801 "sha384", 00:04:16.801 "sha512" 00:04:16.801 ], 00:04:16.801 "dhchap_dhgroups": [ 00:04:16.801 "null", 00:04:16.801 "ffdhe2048", 00:04:16.801 "ffdhe3072", 00:04:16.801 "ffdhe4096", 00:04:16.801 "ffdhe6144", 00:04:16.801 "ffdhe8192" 00:04:16.801 ] 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "nvmf_set_max_subsystems", 00:04:16.801 "params": { 00:04:16.801 "max_subsystems": 1024 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "nvmf_set_crdt", 00:04:16.801 "params": { 00:04:16.801 "crdt1": 0, 00:04:16.801 "crdt2": 0, 00:04:16.801 "crdt3": 0 00:04:16.801 } 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "method": "nvmf_create_transport", 00:04:16.801 "params": { 00:04:16.801 "trtype": "TCP", 00:04:16.801 "max_queue_depth": 128, 00:04:16.801 "max_io_qpairs_per_ctrlr": 127, 00:04:16.801 "in_capsule_data_size": 4096, 00:04:16.801 "max_io_size": 131072, 00:04:16.801 "io_unit_size": 131072, 00:04:16.801 "max_aq_depth": 128, 00:04:16.801 "num_shared_buffers": 511, 00:04:16.801 "buf_cache_size": 4294967295, 00:04:16.801 "dif_insert_or_strip": false, 00:04:16.801 "zcopy": false, 00:04:16.801 "c2h_success": true, 00:04:16.801 "sock_priority": 0, 00:04:16.801 "abort_timeout_sec": 1, 00:04:16.801 "ack_timeout": 0, 00:04:16.801 "data_wr_pool_size": 0 00:04:16.801 } 00:04:16.801 } 00:04:16.801 ] 00:04:16.801 }, 00:04:16.801 { 00:04:16.801 "subsystem": "iscsi", 00:04:16.801 "config": [ 00:04:16.801 { 00:04:16.801 "method": "iscsi_set_options", 00:04:16.801 "params": { 00:04:16.801 "node_base": "iqn.2016-06.io.spdk", 00:04:16.801 "max_sessions": 128, 00:04:16.801 "max_connections_per_session": 2, 00:04:16.801 "max_queue_depth": 64, 00:04:16.801 
"default_time2wait": 2, 00:04:16.801 "default_time2retain": 20, 00:04:16.801 "first_burst_length": 8192, 00:04:16.801 "immediate_data": true, 00:04:16.801 "allow_duplicated_isid": false, 00:04:16.801 "error_recovery_level": 0, 00:04:16.801 "nop_timeout": 60, 00:04:16.801 "nop_in_interval": 30, 00:04:16.801 "disable_chap": false, 00:04:16.801 "require_chap": false, 00:04:16.801 "mutual_chap": false, 00:04:16.801 "chap_group": 0, 00:04:16.801 "max_large_datain_per_connection": 64, 00:04:16.801 "max_r2t_per_connection": 4, 00:04:16.801 "pdu_pool_size": 36864, 00:04:16.801 "immediate_data_pool_size": 16384, 00:04:16.801 "data_out_pool_size": 2048 00:04:16.801 } 00:04:16.801 } 00:04:16.801 ] 00:04:16.801 } 00:04:16.801 ] 00:04:16.801 } 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2161476 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2161476 ']' 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2161476 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.801 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2161476 00:04:17.061 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.061 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.061 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2161476' 00:04:17.061 killing process with pid 2161476 00:04:17.061 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2161476 00:04:17.061 17:52:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2161476 00:04:17.325 17:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2161760 00:04:17.325 17:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:17.325 17:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2161760 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2161760 ']' 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2161760 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2161760 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2161760' 00:04:22.599 killing process with pid 2161760 00:04:22.599 17:52:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2161760 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2161760 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:22.599 00:04:22.599 real 0m6.843s 00:04:22.599 user 0m6.632s 00:04:22.599 sys 0m0.738s 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.599 ************************************ 00:04:22.599 END TEST skip_rpc_with_json 00:04:22.599 ************************************ 00:04:22.599 17:52:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.599 17:52:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.599 17:52:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.599 17:52:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.599 ************************************ 00:04:22.599 START TEST skip_rpc_with_delay 00:04:22.599 ************************************ 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.599 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.859 [2024-12-09 17:52:30.628081] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
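The error above is the test's expected outcome: skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc, a pairing spdk_tgt rejects at startup because --wait-for-rpc only makes sense when an RPC server will be listening. A minimal stand-alone sketch of the same expected-failure check (binary path as used throughout this run; the harness's NOT helper is approximated here with a plain !):

    # Expected failure: --wait-for-rpc needs an RPC server, so combining it
    # with --no-rpc-server must make spdk_tgt exit non-zero.
    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    if ! "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt rejected the flag combination, as expected"
    fi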
00:04:22.859 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:22.859 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.859 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:22.859 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:22.859 00:04:22.859 real 0m0.075s 00:04:22.859 user 0m0.040s 00:04:22.859 sys 0m0.035s 00:04:22.859 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.859 17:52:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.859 ************************************ 00:04:22.859 END TEST skip_rpc_with_delay 00:04:22.859 ************************************ 00:04:22.859 17:52:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.859 17:52:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.859 17:52:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.859 17:52:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.859 17:52:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.859 17:52:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.859 ************************************ 00:04:22.859 START TEST exit_on_failed_rpc_init 00:04:22.859 ************************************ 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2162869 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2162869 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2162869 ']' 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.859 17:52:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.859 [2024-12-09 17:52:30.788600] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:22.859 [2024-12-09 17:52:30.788648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162869 ] 00:04:23.118 [2024-12-09 17:52:30.875943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.118 [2024-12-09 17:52:30.914659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.686 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.687 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.946 [2024-12-09 17:52:31.687361] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:23.946 [2024-12-09 17:52:31.687413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162972 ] 00:04:23.946 [2024-12-09 17:52:31.776163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.946 [2024-12-09 17:52:31.816365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.946 [2024-12-09 17:52:31.816436] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:23.946 [2024-12-09 17:52:31.816448] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.946 [2024-12-09 17:52:31.816455] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2162869 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2162869 ']' 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2162869 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.946 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2162869 00:04:24.205 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.205 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.205 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2162869' 00:04:24.205 killing process with pid 2162869 00:04:24.205 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2162869 00:04:24.205 17:52:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2162869 00:04:24.465 00:04:24.465 real 0m1.491s 00:04:24.465 user 0m1.674s 00:04:24.465 sys 0m0.482s 00:04:24.465 17:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.465 17:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.465 ************************************ 00:04:24.465 END TEST exit_on_failed_rpc_init 00:04:24.465 ************************************ 00:04:24.465 17:52:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:24.465 00:04:24.465 real 0m14.346s 00:04:24.465 user 0m13.689s 00:04:24.465 sys 0m1.940s 00:04:24.465 17:52:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.465 17:52:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.465 ************************************ 00:04:24.465 END TEST skip_rpc 00:04:24.465 ************************************ 00:04:24.465 17:52:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.465 17:52:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.465 17:52:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.465 17:52:32 -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.465 ************************************ 00:04:24.465 START TEST rpc_client 00:04:24.465 ************************************ 00:04:24.465 17:52:32 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.725 * Looking for test storage... 00:04:24.725 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.725 17:52:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.725 --rc genhtml_branch_coverage=1 00:04:24.725 --rc genhtml_function_coverage=1 00:04:24.725 --rc genhtml_legend=1 00:04:24.725 --rc geninfo_all_blocks=1 00:04:24.725 --rc geninfo_unexecuted_blocks=1 00:04:24.725 00:04:24.725 ' 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.725 --rc genhtml_branch_coverage=1 00:04:24.725 --rc genhtml_function_coverage=1 00:04:24.725 --rc genhtml_legend=1 00:04:24.725 --rc geninfo_all_blocks=1 00:04:24.725 --rc geninfo_unexecuted_blocks=1 00:04:24.725 00:04:24.725 ' 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.725 --rc genhtml_branch_coverage=1 00:04:24.725 --rc genhtml_function_coverage=1 00:04:24.725 --rc genhtml_legend=1 00:04:24.725 --rc geninfo_all_blocks=1 00:04:24.725 --rc geninfo_unexecuted_blocks=1 00:04:24.725 00:04:24.725 ' 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.725 --rc genhtml_branch_coverage=1 00:04:24.725 --rc genhtml_function_coverage=1 00:04:24.725 --rc genhtml_legend=1 00:04:24.725 --rc geninfo_all_blocks=1 00:04:24.725 --rc geninfo_unexecuted_blocks=1 00:04:24.725 00:04:24.725 ' 00:04:24.725 17:52:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:24.725 OK 00:04:24.725 17:52:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.725 00:04:24.725 real 0m0.221s 00:04:24.725 user 0m0.122s 00:04:24.725 sys 0m0.116s 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.725 17:52:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 ************************************ 00:04:24.725 END TEST rpc_client 00:04:24.725 ************************************ 00:04:24.725 17:52:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.725 
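The long trace above is scripts/common.sh checking whether the installed lcov predates 2.x, which decides the set of coverage flags exported for the coverage-aware tests. The comparison splits each version string on '.', '-' and ':' and compares the numeric fields left to right. A condensed sketch of that logic (a simplified rendering of the traced cmp_versions/lt helpers, not the full script):

    # Return success (0) when version $1 sorts strictly before $2,
    # comparing dot/dash/colon-separated numeric fields left to right.
    lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly less
    }
    lt 1.15 2 && echo 'lcov is pre-2.x: use the legacy --rc lcov_* options'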
17:52:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.725 17:52:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.725 17:52:32 -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 ************************************ 00:04:24.725 START TEST json_config 00:04:24.725 ************************************ 00:04:24.725 17:52:32 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.985 17:52:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.985 17:52:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.985 17:52:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.985 17:52:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.985 17:52:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.985 17:52:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:24.985 17:52:32 json_config -- scripts/common.sh@345 -- # : 1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.985 17:52:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.985 17:52:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@353 -- # local d=1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.985 17:52:32 json_config -- scripts/common.sh@355 -- # echo 1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.985 17:52:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@353 -- # local d=2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.985 17:52:32 json_config -- scripts/common.sh@355 -- # echo 2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.985 17:52:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.985 17:52:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.985 17:52:32 json_config -- scripts/common.sh@368 -- # return 0 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.985 --rc genhtml_branch_coverage=1 00:04:24.985 --rc genhtml_function_coverage=1 00:04:24.985 --rc genhtml_legend=1 00:04:24.985 --rc geninfo_all_blocks=1 00:04:24.985 --rc geninfo_unexecuted_blocks=1 00:04:24.985 00:04:24.985 ' 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.985 --rc genhtml_branch_coverage=1 00:04:24.985 --rc genhtml_function_coverage=1 00:04:24.985 --rc genhtml_legend=1 00:04:24.985 --rc geninfo_all_blocks=1 00:04:24.985 --rc geninfo_unexecuted_blocks=1 00:04:24.985 00:04:24.985 ' 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.985 --rc genhtml_branch_coverage=1 00:04:24.985 --rc genhtml_function_coverage=1 00:04:24.985 --rc genhtml_legend=1 00:04:24.985 --rc geninfo_all_blocks=1 00:04:24.985 --rc geninfo_unexecuted_blocks=1 00:04:24.985 00:04:24.985 ' 00:04:24.985 17:52:32 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.985 --rc genhtml_branch_coverage=1 00:04:24.985 --rc genhtml_function_coverage=1 00:04:24.985 --rc genhtml_legend=1 00:04:24.985 --rc geninfo_all_blocks=1 00:04:24.985 --rc geninfo_unexecuted_blocks=1 00:04:24.985 00:04:24.985 ' 00:04:24.985 17:52:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:24.986 17:52:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:24.986 17:52:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.986 17:52:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.986 17:52:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.986 17:52:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.986 17:52:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.986 17:52:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.986 17:52:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.986 17:52:32 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.986 17:52:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@51 -- # : 0 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:24.986 
17:52:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.986 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.986 17:52:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:24.986 INFO: JSON configuration test init 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.986 17:52:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.986 17:52:32 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:24.986 17:52:32 json_config -- json_config/common.sh@10 -- # shift 00:04:24.986 17:52:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.986 17:52:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.986 17:52:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.986 17:52:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.986 17:52:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.986 17:52:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2163275 00:04:24.986 17:52:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.986 Waiting for target to run... 00:04:24.986 17:52:32 json_config -- json_config/common.sh@25 -- # waitforlisten 2163275 /var/tmp/spdk_tgt.sock 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 2163275 ']' 00:04:24.986 17:52:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.986 17:52:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.986 [2024-12-09 17:52:32.926364] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:24.986 [2024-12-09 17:52:32.926414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163275 ] 00:04:25.555 [2024-12-09 17:52:33.226364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.555 [2024-12-09 17:52:33.260972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.814 17:52:33 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.814 17:52:33 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:25.814 17:52:33 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.814 00:04:25.814 17:52:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:25.814 17:52:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:25.814 17:52:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.814 17:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.814 17:52:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:25.814 17:52:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:25.814 17:52:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.814 17:52:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.074 17:52:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:26.074 17:52:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:26.074 17:52:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.364 17:52:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.364 17:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:29.364 17:52:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:29.364 17:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@54 -- # sort 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:29.364 17:52:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.364 17:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:29.364 17:52:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:29.365 17:52:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.365 17:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:29.365 17:52:37 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:29.365 17:52:37 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:29.365 17:52:37 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:29.365 17:52:37 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:29.365 17:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:36.045 
17:52:43 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:36.045 17:52:43 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:04:36.045 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:04:36.045 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:04:36.045 17:52:44 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:36.045 17:52:44 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:04:36.046 Found net devices under 0000:d9:00.0: mlx_0_0 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:04:36.046 Found net devices under 0000:d9:00.1: mlx_0_1 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@62 -- # uname 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:36.046 17:52:44 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:36.304 17:52:44 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:36.305 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:36.305 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:04:36.305 altname enp217s0f0np0 00:04:36.305 altname ens818f0np0 00:04:36.305 inet 192.168.100.8/24 scope global mlx_0_0 00:04:36.305 valid_lft forever preferred_lft forever 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:36.305 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:36.305 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:04:36.305 altname enp217s0f1np1 00:04:36.305 altname ens818f1np1 
00:04:36.305 inet 192.168.100.9/24 scope global mlx_0_1 00:04:36.305 valid_lft forever preferred_lft forever 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@450 -- # return 0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:36.305 192.168.100.9' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:36.305 192.168.100.9' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:36.305 17:52:44 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:36.305 192.168.100.9' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@486 -- # head -n 1 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:36.305 17:52:44 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:36.305 17:52:44 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:36.305 17:52:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.305 17:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.564 MallocForNvmf0 00:04:36.564 17:52:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:36.564 17:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:36.823 MallocForNvmf1 00:04:36.823 17:52:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:36.823 17:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:37.082 [2024-12-09 17:52:44.813577] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:37.082 [2024-12-09 17:52:44.855878] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fd84f0/0x1eacfc0) succeed. 00:04:37.082 [2024-12-09 17:52:44.876335] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd7530/0x1f2cc80) succeed. 
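Collected from the trace above, the target bootstrap so far reduces to three RPCs. A minimal sketch using the paths and socket from this run (sizes follow bdev_malloc_create's total_size-in-MB / block-size-in-bytes arguments):

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB backing bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB backing bdev, 1024 B blocks
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0         # RDMA transport; the requested -c 0 is
                                                            # raised to the 256 B minimum, per the
                                                            # rdma.c warning logged above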
00:04:37.082 17:52:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.082 17:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.342 17:52:45 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.342 17:52:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.342 17:52:45 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:37.342 17:52:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:37.602 17:52:45 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:37.602 17:52:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:37.860 [2024-12-09 17:52:45.685109] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:37.860 17:52:45 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:37.860 17:52:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.860 17:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.860 17:52:45 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:37.860 17:52:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.860 17:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.860 17:52:45 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:37.861 17:52:45 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:37.861 17:52:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.119 MallocBdevForConfigChangeCheck 00:04:38.119 17:52:45 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:38.119 17:52:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.119 17:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.119 17:52:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:38.120 17:52:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.688 17:52:46 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:38.688 INFO: shutting down applications... 
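The subsystem wiring traced just above, before the shutdown begins, amounts to four more RPCs (reusing $rpc from the sketch earlier; values are the ones from this run):

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0             # namespace 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1             # namespace 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420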
00:04:38.688 17:52:46 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:38.688 17:52:46 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:38.688 17:52:46 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:38.688 17:52:46 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.223 Calling clear_iscsi_subsystem 00:04:41.223 Calling clear_nvmf_subsystem 00:04:41.223 Calling clear_nbd_subsystem 00:04:41.223 Calling clear_ublk_subsystem 00:04:41.223 Calling clear_vhost_blk_subsystem 00:04:41.223 Calling clear_vhost_scsi_subsystem 00:04:41.223 Calling clear_bdev_subsystem 00:04:41.223 17:52:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:41.223 17:52:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:41.223 17:52:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:41.223 17:52:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.223 17:52:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.223 17:52:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.482 17:52:49 json_config -- json_config/json_config.sh@352 -- # break 00:04:41.482 17:52:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:41.482 17:52:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:41.482 17:52:49 json_config -- json_config/common.sh@31 -- # local app=target 00:04:41.482 17:52:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.482 17:52:49 json_config -- json_config/common.sh@35 -- # [[ -n 2163275 ]] 00:04:41.482 17:52:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2163275 00:04:41.482 17:52:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.482 17:52:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.482 17:52:49 json_config -- json_config/common.sh@41 -- # kill -0 2163275 00:04:41.482 17:52:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.052 17:52:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.052 17:52:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.052 17:52:49 json_config -- json_config/common.sh@41 -- # kill -0 2163275 00:04:42.052 17:52:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.052 17:52:49 json_config -- json_config/common.sh@43 -- # break 00:04:42.052 17:52:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.052 17:52:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.052 SPDK target shutdown done 00:04:42.052 17:52:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:42.052 INFO: relaunching applications... 
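The shutdown that just completed polls the pid rather than blocking on it; a minimal equivalent of the loop traced in json_config/common.sh (the function name here is illustrative):

    shutdown_target() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0   # pid gone: shutdown completed
            sleep 0.5
        done
        return 1                                     # still running after ~15 s
    }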
00:04:42.052 17:52:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.052 17:52:49 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.052 17:52:49 json_config -- json_config/common.sh@10 -- # shift 00:04:42.052 17:52:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.052 17:52:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.052 17:52:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.052 17:52:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.052 17:52:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.052 17:52:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2168382 00:04:42.052 17:52:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.052 Waiting for target to run... 00:04:42.052 17:52:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.052 17:52:49 json_config -- json_config/common.sh@25 -- # waitforlisten 2168382 /var/tmp/spdk_tgt.sock 00:04:42.052 17:52:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 2168382 ']' 00:04:42.052 17:52:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.052 17:52:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.052 17:52:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.052 17:52:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.052 17:52:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.052 [2024-12-09 17:52:49.870789] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:42.052 [2024-12-09 17:52:49.870857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168382 ] 00:04:42.620 [2024-12-09 17:52:50.326349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.620 [2024-12-09 17:52:50.377741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.911 [2024-12-09 17:52:53.451437] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x120fde0/0x11beb80) succeed. 00:04:45.911 [2024-12-09 17:52:53.462616] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1213030/0x123ebc0) succeed. 
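The relaunch above replays the snapshot taken earlier with save_config; the command line, as traced:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 \                                                        # run reactors on core 0 only
        -s 1024 \                                                       # 1024 MB of hugepage memory
        -r /var/tmp/spdk_tgt.sock \                                     # RPC listen socket
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json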
00:04:45.911 [2024-12-09 17:52:53.511319] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:46.170 17:52:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.170 17:52:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:46.170 17:52:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.170 00:04:46.170 17:52:54 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:46.170 17:52:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:46.170 INFO: Checking if target configuration is the same... 00:04:46.170 17:52:54 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:46.170 17:52:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.171 17:52:54 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.171 + '[' 2 -ne 2 ']' 00:04:46.171 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.171 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:46.171 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:46.171 +++ basename /dev/fd/62 00:04:46.171 ++ mktemp /tmp/62.XXX 00:04:46.171 + tmp_file_1=/tmp/62.Lyi 00:04:46.171 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.171 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.171 + tmp_file_2=/tmp/spdk_tgt_config.json.WT4 00:04:46.171 + ret=0 00:04:46.171 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.737 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.737 + diff -u /tmp/62.Lyi /tmp/spdk_tgt_config.json.WT4 00:04:46.737 + echo 'INFO: JSON config files are the same' 00:04:46.737 INFO: JSON config files are the same 00:04:46.737 + rm /tmp/62.Lyi /tmp/spdk_tgt_config.json.WT4 00:04:46.737 + exit 0 00:04:46.737 17:52:54 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:46.737 17:52:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:46.737 INFO: changing configuration and checking if this can be detected... 
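The equality check that just passed is json_diff.sh normalizing both sides before a plain diff; a sketch assuming config_filter.py reads stdin, as the pipelines above suggest ($rootdir stands for the spdk checkout, $rpc as defined earlier):

    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    $rpc save_config | "$rootdir/test/json_config/config_filter.py" -method sort > "$live"
    "$rootdir/test/json_config/config_filter.py" -method sort < "$rootdir/spdk_tgt_config.json" > "$saved"
    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    fi
    rm "$live" "$saved"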
00:04:46.737 17:52:54 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.737 17:52:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.737 17:52:54 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.737 17:52:54 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:46.737 17:52:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.737 + '[' 2 -ne 2 ']' 00:04:46.737 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.737 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:46.737 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:46.737 +++ basename /dev/fd/62 00:04:46.737 ++ mktemp /tmp/62.XXX 00:04:46.737 + tmp_file_1=/tmp/62.kIX 00:04:46.737 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.737 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.737 + tmp_file_2=/tmp/spdk_tgt_config.json.I0d 00:04:46.737 + ret=0 00:04:46.737 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.305 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.305 + diff -u /tmp/62.kIX /tmp/spdk_tgt_config.json.I0d 00:04:47.305 + ret=1 00:04:47.305 + echo '=== Start of file: /tmp/62.kIX ===' 00:04:47.305 + cat /tmp/62.kIX 00:04:47.305 + echo '=== End of file: /tmp/62.kIX ===' 00:04:47.305 + echo '' 00:04:47.305 + echo '=== Start of file: /tmp/spdk_tgt_config.json.I0d ===' 00:04:47.305 + cat /tmp/spdk_tgt_config.json.I0d 00:04:47.305 + echo '=== End of file: /tmp/spdk_tgt_config.json.I0d ===' 00:04:47.305 + echo '' 00:04:47.305 + rm /tmp/62.kIX /tmp/spdk_tgt_config.json.I0d 00:04:47.305 + exit 1 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:47.305 INFO: configuration change detected. 
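Change detection inverts that check: mutate the live configuration by deleting the sentinel bdev created earlier, re-run the same comparison, and require it to report a difference. A sketch, where diff_configs is a hypothetical stand-in for the json_diff.sh invocation above:

    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    if diff_configs; then                 # diff_configs: hypothetical wrapper around json_diff.sh
        echo 'ERROR: configuration change was not detected'
        exit 1
    fi
    echo 'INFO: configuration change detected.'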
00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@324 -- # [[ -n 2168382 ]] 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.305 17:52:55 json_config -- json_config/json_config.sh@330 -- # killprocess 2168382 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@954 -- # '[' -z 2168382 ']' 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@958 -- # kill -0 2168382 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@959 -- # uname 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2168382 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2168382' 00:04:47.305 killing process with pid 2168382 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@973 -- # kill 2168382 00:04:47.305 17:52:55 json_config -- common/autotest_common.sh@978 -- # wait 2168382 00:04:49.839 17:52:57 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.839 17:52:57 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:49.839 17:52:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.839 17:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.839 17:52:57 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:49.839 17:52:57 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:49.839 INFO: Success 00:04:49.839 17:52:57 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@121 -- # sync 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:49.839 17:52:57 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:49.839 00:04:49.839 real 0m25.106s 00:04:49.839 user 0m27.599s 00:04:49.839 sys 0m8.100s 00:04:49.839 17:52:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.839 17:52:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.839 ************************************ 00:04:49.839 END TEST json_config 00:04:49.839 ************************************ 00:04:49.839 17:52:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:49.839 17:52:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.839 17:52:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.839 17:52:57 -- common/autotest_common.sh@10 -- # set +x 00:04:50.098 ************************************ 00:04:50.098 START TEST json_config_extra_key 00:04:50.098 ************************************ 00:04:50.098 17:52:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.098 17:52:57 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.098 17:52:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.098 17:52:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.098 17:52:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.098 17:52:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.098 17:52:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.098 17:52:58 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.098 17:52:58 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.098 --rc genhtml_branch_coverage=1 00:04:50.098 --rc genhtml_function_coverage=1 00:04:50.098 --rc genhtml_legend=1 00:04:50.098 --rc geninfo_all_blocks=1 00:04:50.098 --rc geninfo_unexecuted_blocks=1 00:04:50.098 00:04:50.098 ' 00:04:50.098 17:52:58 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.098 --rc genhtml_branch_coverage=1 00:04:50.098 --rc genhtml_function_coverage=1 00:04:50.098 --rc genhtml_legend=1 00:04:50.098 --rc geninfo_all_blocks=1 00:04:50.098 --rc geninfo_unexecuted_blocks=1 00:04:50.098 00:04:50.098 ' 00:04:50.098 17:52:58 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.098 --rc genhtml_branch_coverage=1 00:04:50.098 --rc genhtml_function_coverage=1 00:04:50.098 --rc genhtml_legend=1 00:04:50.098 --rc geninfo_all_blocks=1 00:04:50.098 --rc geninfo_unexecuted_blocks=1 00:04:50.098 00:04:50.098 ' 00:04:50.098 17:52:58 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.098 --rc genhtml_branch_coverage=1 00:04:50.098 --rc genhtml_function_coverage=1 00:04:50.098 --rc genhtml_legend=1 00:04:50.098 --rc geninfo_all_blocks=1 00:04:50.098 --rc geninfo_unexecuted_blocks=1 00:04:50.098 00:04:50.098 ' 00:04:50.098 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.098 
17:52:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.098 17:52:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:50.099 17:52:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.099 17:52:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.099 17:52:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.099 17:52:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.099 17:52:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.099 17:52:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.099 17:52:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.099 17:52:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.099 17:52:58 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.099 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.099 17:52:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.099 INFO: launching applications... 
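The common.sh sourced above tracks each app instance through parallel associative arrays keyed by app name; a minimal reproduction of the declarations just traced ($rootdir assumed, on_error_exit defined elsewhere in common.sh):

    declare -A app_pid=(['target']='')                      # filled in with $! after launch
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']="$rootdir/test/json_config/extra_key.json")
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR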
00:04:50.099 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2170043 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.099 Waiting for target to run... 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2170043 /var/tmp/spdk_tgt.sock 00:04:50.099 17:52:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.099 17:52:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2170043 ']' 00:04:50.099 17:52:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.099 17:52:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.099 17:52:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.099 17:52:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.099 17:52:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.359 [2024-12-09 17:52:58.102435] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:50.359 [2024-12-09 17:52:58.102491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170043 ] 00:04:50.618 [2024-12-09 17:52:58.409141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.618 [2024-12-09 17:52:58.441312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.186 17:52:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.186 17:52:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.186 00:04:51.186 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:51.186 INFO: shutting down applications... 
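waitforlisten's body isn't shown in this log; a plausible minimal stand-in polls for the RPC UNIX socket while confirming the target is still alive:

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
            [[ -S $sock ]] && return 0               # socket present: target is listening
            sleep 0.1
        done
        return 1
    }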
00:04:51.186 17:52:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2170043 ]] 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2170043 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2170043 00:04:51.186 17:52:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2170043 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.755 17:52:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.755 SPDK target shutdown done 00:04:51.755 17:52:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.755 Success 00:04:51.755 00:04:51.755 real 0m1.586s 00:04:51.755 user 0m1.297s 00:04:51.755 sys 0m0.464s 00:04:51.755 17:52:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.755 17:52:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.756 ************************************ 00:04:51.756 END TEST json_config_extra_key 00:04:51.756 ************************************ 00:04:51.756 17:52:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.756 17:52:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.756 17:52:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.756 17:52:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.756 ************************************ 00:04:51.756 START TEST alias_rpc 00:04:51.756 ************************************ 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.756 * Looking for test storage... 
00:04:51.756 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.756 17:52:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.756 --rc genhtml_branch_coverage=1 00:04:51.756 --rc genhtml_function_coverage=1 00:04:51.756 --rc genhtml_legend=1 00:04:51.756 --rc geninfo_all_blocks=1 00:04:51.756 --rc geninfo_unexecuted_blocks=1 00:04:51.756 00:04:51.756 ' 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.756 --rc genhtml_branch_coverage=1 00:04:51.756 --rc genhtml_function_coverage=1 00:04:51.756 --rc genhtml_legend=1 00:04:51.756 --rc geninfo_all_blocks=1 00:04:51.756 --rc geninfo_unexecuted_blocks=1 00:04:51.756 00:04:51.756 ' 00:04:51.756 17:52:59 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.756 --rc genhtml_branch_coverage=1 00:04:51.756 --rc genhtml_function_coverage=1 00:04:51.756 --rc genhtml_legend=1 00:04:51.756 --rc geninfo_all_blocks=1 00:04:51.756 --rc geninfo_unexecuted_blocks=1 00:04:51.756 00:04:51.756 ' 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.756 --rc genhtml_branch_coverage=1 00:04:51.756 --rc genhtml_function_coverage=1 00:04:51.756 --rc genhtml_legend=1 00:04:51.756 --rc geninfo_all_blocks=1 00:04:51.756 --rc geninfo_unexecuted_blocks=1 00:04:51.756 00:04:51.756 ' 00:04:51.756 17:52:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.756 17:52:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2170421 00:04:51.756 17:52:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2170421 00:04:51.756 17:52:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2170421 ']' 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.756 17:52:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.015 [2024-12-09 17:52:59.766040] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:52.015 [2024-12-09 17:52:59.766094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170421 ] 00:04:52.016 [2024-12-09 17:52:59.853333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.016 [2024-12-09 17:52:59.893777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.953 17:53:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:52.953 17:53:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2170421 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2170421 ']' 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2170421 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170421 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170421' 00:04:52.953 killing process with pid 2170421 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 2170421 00:04:52.953 17:53:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 2170421 00:04:53.211 00:04:53.211 real 0m1.639s 00:04:53.211 user 0m1.749s 00:04:53.211 sys 0m0.497s 00:04:53.211 17:53:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.211 17:53:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.211 ************************************ 00:04:53.211 END TEST alias_rpc 00:04:53.211 ************************************ 00:04:53.471 17:53:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:53.471 17:53:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.471 17:53:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.471 17:53:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.471 17:53:01 -- common/autotest_common.sh@10 -- # set +x 00:04:53.471 ************************************ 00:04:53.471 START TEST spdkcli_tcp 00:04:53.471 ************************************ 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.471 * Looking for test storage... 
00:04:53.471 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.471 17:53:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.471 --rc genhtml_branch_coverage=1 00:04:53.471 --rc genhtml_function_coverage=1 00:04:53.471 --rc genhtml_legend=1 00:04:53.471 --rc geninfo_all_blocks=1 00:04:53.471 --rc geninfo_unexecuted_blocks=1 00:04:53.471 00:04:53.471 ' 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.471 --rc genhtml_branch_coverage=1 00:04:53.471 --rc genhtml_function_coverage=1 00:04:53.471 --rc genhtml_legend=1 00:04:53.471 --rc geninfo_all_blocks=1 00:04:53.471 --rc geninfo_unexecuted_blocks=1 
00:04:53.471 00:04:53.471 ' 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.471 --rc genhtml_branch_coverage=1 00:04:53.471 --rc genhtml_function_coverage=1 00:04:53.471 --rc genhtml_legend=1 00:04:53.471 --rc geninfo_all_blocks=1 00:04:53.471 --rc geninfo_unexecuted_blocks=1 00:04:53.471 00:04:53.471 ' 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.471 --rc genhtml_branch_coverage=1 00:04:53.471 --rc genhtml_function_coverage=1 00:04:53.471 --rc genhtml_legend=1 00:04:53.471 --rc geninfo_all_blocks=1 00:04:53.471 --rc geninfo_unexecuted_blocks=1 00:04:53.471 00:04:53.471 ' 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2170775 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.471 17:53:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2170775 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2170775 ']' 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.471 17:53:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.730 [2024-12-09 17:53:01.489581] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:53.730 [2024-12-09 17:53:01.489634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170775 ] 00:04:53.730 [2024-12-09 17:53:01.577292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.730 [2024-12-09 17:53:01.616292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.730 [2024-12-09 17:53:01.616292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.667 17:53:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.667 17:53:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:54.667 17:53:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.667 17:53:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2170798 00:04:54.667 17:53:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.667 [ 00:04:54.667 "bdev_malloc_delete", 00:04:54.667 "bdev_malloc_create", 00:04:54.667 "bdev_null_resize", 00:04:54.667 "bdev_null_delete", 00:04:54.667 "bdev_null_create", 00:04:54.667 "bdev_nvme_cuse_unregister", 00:04:54.667 "bdev_nvme_cuse_register", 00:04:54.667 "bdev_opal_new_user", 00:04:54.667 "bdev_opal_set_lock_state", 00:04:54.667 "bdev_opal_delete", 00:04:54.667 "bdev_opal_get_info", 00:04:54.667 "bdev_opal_create", 00:04:54.667 "bdev_nvme_opal_revert", 00:04:54.667 "bdev_nvme_opal_init", 00:04:54.667 "bdev_nvme_send_cmd", 00:04:54.667 "bdev_nvme_set_keys", 00:04:54.667 "bdev_nvme_get_path_iostat", 00:04:54.667 "bdev_nvme_get_mdns_discovery_info", 00:04:54.667 "bdev_nvme_stop_mdns_discovery", 00:04:54.667 "bdev_nvme_start_mdns_discovery", 00:04:54.667 "bdev_nvme_set_multipath_policy", 00:04:54.667 "bdev_nvme_set_preferred_path", 00:04:54.667 "bdev_nvme_get_io_paths", 00:04:54.667 "bdev_nvme_remove_error_injection", 00:04:54.667 "bdev_nvme_add_error_injection", 00:04:54.667 "bdev_nvme_get_discovery_info", 00:04:54.667 "bdev_nvme_stop_discovery", 00:04:54.667 "bdev_nvme_start_discovery", 00:04:54.667 "bdev_nvme_get_controller_health_info", 00:04:54.667 "bdev_nvme_disable_controller", 00:04:54.667 "bdev_nvme_enable_controller", 00:04:54.667 "bdev_nvme_reset_controller", 00:04:54.667 "bdev_nvme_get_transport_statistics", 00:04:54.667 "bdev_nvme_apply_firmware", 00:04:54.667 "bdev_nvme_detach_controller", 00:04:54.667 "bdev_nvme_get_controllers", 00:04:54.667 "bdev_nvme_attach_controller", 00:04:54.667 "bdev_nvme_set_hotplug", 00:04:54.667 "bdev_nvme_set_options", 00:04:54.667 "bdev_passthru_delete", 00:04:54.667 "bdev_passthru_create", 00:04:54.667 "bdev_lvol_set_parent_bdev", 00:04:54.667 "bdev_lvol_set_parent", 00:04:54.667 "bdev_lvol_check_shallow_copy", 00:04:54.667 "bdev_lvol_start_shallow_copy", 00:04:54.667 "bdev_lvol_grow_lvstore", 00:04:54.667 "bdev_lvol_get_lvols", 00:04:54.667 "bdev_lvol_get_lvstores", 00:04:54.667 "bdev_lvol_delete", 00:04:54.667 "bdev_lvol_set_read_only", 00:04:54.667 "bdev_lvol_resize", 00:04:54.667 "bdev_lvol_decouple_parent", 00:04:54.667 "bdev_lvol_inflate", 00:04:54.667 "bdev_lvol_rename", 00:04:54.667 "bdev_lvol_clone_bdev", 00:04:54.667 "bdev_lvol_clone", 00:04:54.667 "bdev_lvol_snapshot", 00:04:54.667 "bdev_lvol_create", 00:04:54.667 "bdev_lvol_delete_lvstore", 00:04:54.667 "bdev_lvol_rename_lvstore", 
00:04:54.667 "bdev_lvol_create_lvstore", 00:04:54.667 "bdev_raid_set_options", 00:04:54.667 "bdev_raid_remove_base_bdev", 00:04:54.667 "bdev_raid_add_base_bdev", 00:04:54.667 "bdev_raid_delete", 00:04:54.667 "bdev_raid_create", 00:04:54.667 "bdev_raid_get_bdevs", 00:04:54.667 "bdev_error_inject_error", 00:04:54.667 "bdev_error_delete", 00:04:54.667 "bdev_error_create", 00:04:54.667 "bdev_split_delete", 00:04:54.667 "bdev_split_create", 00:04:54.667 "bdev_delay_delete", 00:04:54.667 "bdev_delay_create", 00:04:54.667 "bdev_delay_update_latency", 00:04:54.667 "bdev_zone_block_delete", 00:04:54.667 "bdev_zone_block_create", 00:04:54.667 "blobfs_create", 00:04:54.667 "blobfs_detect", 00:04:54.667 "blobfs_set_cache_size", 00:04:54.667 "bdev_aio_delete", 00:04:54.667 "bdev_aio_rescan", 00:04:54.667 "bdev_aio_create", 00:04:54.667 "bdev_ftl_set_property", 00:04:54.667 "bdev_ftl_get_properties", 00:04:54.667 "bdev_ftl_get_stats", 00:04:54.667 "bdev_ftl_unmap", 00:04:54.667 "bdev_ftl_unload", 00:04:54.667 "bdev_ftl_delete", 00:04:54.667 "bdev_ftl_load", 00:04:54.667 "bdev_ftl_create", 00:04:54.667 "bdev_virtio_attach_controller", 00:04:54.667 "bdev_virtio_scsi_get_devices", 00:04:54.667 "bdev_virtio_detach_controller", 00:04:54.667 "bdev_virtio_blk_set_hotplug", 00:04:54.667 "bdev_iscsi_delete", 00:04:54.667 "bdev_iscsi_create", 00:04:54.667 "bdev_iscsi_set_options", 00:04:54.667 "accel_error_inject_error", 00:04:54.667 "ioat_scan_accel_module", 00:04:54.667 "dsa_scan_accel_module", 00:04:54.667 "iaa_scan_accel_module", 00:04:54.667 "keyring_file_remove_key", 00:04:54.667 "keyring_file_add_key", 00:04:54.667 "keyring_linux_set_options", 00:04:54.667 "fsdev_aio_delete", 00:04:54.667 "fsdev_aio_create", 00:04:54.667 "iscsi_get_histogram", 00:04:54.667 "iscsi_enable_histogram", 00:04:54.667 "iscsi_set_options", 00:04:54.667 "iscsi_get_auth_groups", 00:04:54.667 "iscsi_auth_group_remove_secret", 00:04:54.667 "iscsi_auth_group_add_secret", 00:04:54.667 "iscsi_delete_auth_group", 00:04:54.667 "iscsi_create_auth_group", 00:04:54.667 "iscsi_set_discovery_auth", 00:04:54.667 "iscsi_get_options", 00:04:54.667 "iscsi_target_node_request_logout", 00:04:54.667 "iscsi_target_node_set_redirect", 00:04:54.667 "iscsi_target_node_set_auth", 00:04:54.667 "iscsi_target_node_add_lun", 00:04:54.667 "iscsi_get_stats", 00:04:54.667 "iscsi_get_connections", 00:04:54.667 "iscsi_portal_group_set_auth", 00:04:54.667 "iscsi_start_portal_group", 00:04:54.667 "iscsi_delete_portal_group", 00:04:54.667 "iscsi_create_portal_group", 00:04:54.667 "iscsi_get_portal_groups", 00:04:54.667 "iscsi_delete_target_node", 00:04:54.667 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.668 "iscsi_target_node_add_pg_ig_maps", 00:04:54.668 "iscsi_create_target_node", 00:04:54.668 "iscsi_get_target_nodes", 00:04:54.668 "iscsi_delete_initiator_group", 00:04:54.668 "iscsi_initiator_group_remove_initiators", 00:04:54.668 "iscsi_initiator_group_add_initiators", 00:04:54.668 "iscsi_create_initiator_group", 00:04:54.668 "iscsi_get_initiator_groups", 00:04:54.668 "nvmf_set_crdt", 00:04:54.668 "nvmf_set_config", 00:04:54.668 "nvmf_set_max_subsystems", 00:04:54.668 "nvmf_stop_mdns_prr", 00:04:54.668 "nvmf_publish_mdns_prr", 00:04:54.668 "nvmf_subsystem_get_listeners", 00:04:54.668 "nvmf_subsystem_get_qpairs", 00:04:54.668 "nvmf_subsystem_get_controllers", 00:04:54.668 "nvmf_get_stats", 00:04:54.668 "nvmf_get_transports", 00:04:54.668 "nvmf_create_transport", 00:04:54.668 "nvmf_get_targets", 00:04:54.668 "nvmf_delete_target", 00:04:54.668 "nvmf_create_target", 
00:04:54.668 "nvmf_subsystem_allow_any_host", 00:04:54.668 "nvmf_subsystem_set_keys", 00:04:54.668 "nvmf_subsystem_remove_host", 00:04:54.668 "nvmf_subsystem_add_host", 00:04:54.668 "nvmf_ns_remove_host", 00:04:54.668 "nvmf_ns_add_host", 00:04:54.668 "nvmf_subsystem_remove_ns", 00:04:54.668 "nvmf_subsystem_set_ns_ana_group", 00:04:54.668 "nvmf_subsystem_add_ns", 00:04:54.668 "nvmf_subsystem_listener_set_ana_state", 00:04:54.668 "nvmf_discovery_get_referrals", 00:04:54.668 "nvmf_discovery_remove_referral", 00:04:54.668 "nvmf_discovery_add_referral", 00:04:54.668 "nvmf_subsystem_remove_listener", 00:04:54.668 "nvmf_subsystem_add_listener", 00:04:54.668 "nvmf_delete_subsystem", 00:04:54.668 "nvmf_create_subsystem", 00:04:54.668 "nvmf_get_subsystems", 00:04:54.668 "env_dpdk_get_mem_stats", 00:04:54.668 "nbd_get_disks", 00:04:54.668 "nbd_stop_disk", 00:04:54.668 "nbd_start_disk", 00:04:54.668 "ublk_recover_disk", 00:04:54.668 "ublk_get_disks", 00:04:54.668 "ublk_stop_disk", 00:04:54.668 "ublk_start_disk", 00:04:54.668 "ublk_destroy_target", 00:04:54.668 "ublk_create_target", 00:04:54.668 "virtio_blk_create_transport", 00:04:54.668 "virtio_blk_get_transports", 00:04:54.668 "vhost_controller_set_coalescing", 00:04:54.668 "vhost_get_controllers", 00:04:54.668 "vhost_delete_controller", 00:04:54.668 "vhost_create_blk_controller", 00:04:54.668 "vhost_scsi_controller_remove_target", 00:04:54.668 "vhost_scsi_controller_add_target", 00:04:54.668 "vhost_start_scsi_controller", 00:04:54.668 "vhost_create_scsi_controller", 00:04:54.668 "thread_set_cpumask", 00:04:54.668 "scheduler_set_options", 00:04:54.668 "framework_get_governor", 00:04:54.668 "framework_get_scheduler", 00:04:54.668 "framework_set_scheduler", 00:04:54.668 "framework_get_reactors", 00:04:54.668 "thread_get_io_channels", 00:04:54.668 "thread_get_pollers", 00:04:54.668 "thread_get_stats", 00:04:54.668 "framework_monitor_context_switch", 00:04:54.668 "spdk_kill_instance", 00:04:54.668 "log_enable_timestamps", 00:04:54.668 "log_get_flags", 00:04:54.668 "log_clear_flag", 00:04:54.668 "log_set_flag", 00:04:54.668 "log_get_level", 00:04:54.668 "log_set_level", 00:04:54.668 "log_get_print_level", 00:04:54.668 "log_set_print_level", 00:04:54.668 "framework_enable_cpumask_locks", 00:04:54.668 "framework_disable_cpumask_locks", 00:04:54.668 "framework_wait_init", 00:04:54.668 "framework_start_init", 00:04:54.668 "scsi_get_devices", 00:04:54.668 "bdev_get_histogram", 00:04:54.668 "bdev_enable_histogram", 00:04:54.668 "bdev_set_qos_limit", 00:04:54.668 "bdev_set_qd_sampling_period", 00:04:54.668 "bdev_get_bdevs", 00:04:54.668 "bdev_reset_iostat", 00:04:54.668 "bdev_get_iostat", 00:04:54.668 "bdev_examine", 00:04:54.668 "bdev_wait_for_examine", 00:04:54.668 "bdev_set_options", 00:04:54.668 "accel_get_stats", 00:04:54.668 "accel_set_options", 00:04:54.668 "accel_set_driver", 00:04:54.668 "accel_crypto_key_destroy", 00:04:54.668 "accel_crypto_keys_get", 00:04:54.668 "accel_crypto_key_create", 00:04:54.668 "accel_assign_opc", 00:04:54.668 "accel_get_module_info", 00:04:54.668 "accel_get_opc_assignments", 00:04:54.668 "vmd_rescan", 00:04:54.668 "vmd_remove_device", 00:04:54.668 "vmd_enable", 00:04:54.668 "sock_get_default_impl", 00:04:54.668 "sock_set_default_impl", 00:04:54.668 "sock_impl_set_options", 00:04:54.668 "sock_impl_get_options", 00:04:54.668 "iobuf_get_stats", 00:04:54.668 "iobuf_set_options", 00:04:54.668 "keyring_get_keys", 00:04:54.668 "framework_get_pci_devices", 00:04:54.668 "framework_get_config", 00:04:54.668 "framework_get_subsystems", 
00:04:54.668 "fsdev_set_opts", 00:04:54.668 "fsdev_get_opts", 00:04:54.668 "trace_get_info", 00:04:54.668 "trace_get_tpoint_group_mask", 00:04:54.668 "trace_disable_tpoint_group", 00:04:54.668 "trace_enable_tpoint_group", 00:04:54.668 "trace_clear_tpoint_mask", 00:04:54.668 "trace_set_tpoint_mask", 00:04:54.668 "notify_get_notifications", 00:04:54.668 "notify_get_types", 00:04:54.668 "spdk_get_version", 00:04:54.668 "rpc_get_methods" 00:04:54.668 ] 00:04:54.668 17:53:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.668 17:53:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.668 17:53:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2170775 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2170775 ']' 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2170775 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170775 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170775' 00:04:54.668 killing process with pid 2170775 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2170775 00:04:54.668 17:53:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2170775 00:04:55.236 00:04:55.236 real 0m1.687s 00:04:55.236 user 0m3.064s 00:04:55.236 sys 0m0.555s 00:04:55.236 17:53:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.236 17:53:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.237 ************************************ 00:04:55.237 END TEST spdkcli_tcp 00:04:55.237 ************************************ 00:04:55.237 17:53:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.237 17:53:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.237 17:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.237 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.237 ************************************ 00:04:55.237 START TEST dpdk_mem_utility 00:04:55.237 ************************************ 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.237 * Looking for test storage... 
00:04:55.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.237 17:53:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.237 --rc genhtml_branch_coverage=1 00:04:55.237 --rc genhtml_function_coverage=1 00:04:55.237 --rc genhtml_legend=1 00:04:55.237 --rc geninfo_all_blocks=1 00:04:55.237 --rc geninfo_unexecuted_blocks=1 00:04:55.237 00:04:55.237 ' 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.237 --rc 
genhtml_branch_coverage=1 00:04:55.237 --rc genhtml_function_coverage=1 00:04:55.237 --rc genhtml_legend=1 00:04:55.237 --rc geninfo_all_blocks=1 00:04:55.237 --rc geninfo_unexecuted_blocks=1 00:04:55.237 00:04:55.237 ' 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.237 --rc genhtml_branch_coverage=1 00:04:55.237 --rc genhtml_function_coverage=1 00:04:55.237 --rc genhtml_legend=1 00:04:55.237 --rc geninfo_all_blocks=1 00:04:55.237 --rc geninfo_unexecuted_blocks=1 00:04:55.237 00:04:55.237 ' 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.237 --rc genhtml_branch_coverage=1 00:04:55.237 --rc genhtml_function_coverage=1 00:04:55.237 --rc genhtml_legend=1 00:04:55.237 --rc geninfo_all_blocks=1 00:04:55.237 --rc geninfo_unexecuted_blocks=1 00:04:55.237 00:04:55.237 ' 00:04:55.237 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:55.237 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2171123 00:04:55.237 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2171123 00:04:55.237 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2171123 ']' 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.237 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.497 [2024-12-09 17:53:03.257898] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:55.497 [2024-12-09 17:53:03.257967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171123 ] 00:04:55.497 [2024-12-09 17:53:03.332701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.497 [2024-12-09 17:53:03.372556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.757 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.757 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:55.757 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:55.757 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:55.757 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.757 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.757 { 00:04:55.757 "filename": "/tmp/spdk_mem_dump.txt" 00:04:55.757 } 00:04:55.757 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.757 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:55.757 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:55.757 1 heaps totaling size 818.000000 MiB 00:04:55.757 size: 818.000000 MiB heap id: 0 00:04:55.757 end heaps---------- 00:04:55.757 9 mempools totaling size 603.782043 MiB 00:04:55.757 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:55.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:55.757 size: 100.555481 MiB name: bdev_io_2171123 00:04:55.757 size: 50.003479 MiB name: msgpool_2171123 00:04:55.757 size: 36.509338 MiB name: fsdev_io_2171123 00:04:55.757 size: 21.763794 MiB name: PDU_Pool 00:04:55.757 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:55.757 size: 4.133484 MiB name: evtpool_2171123 00:04:55.757 size: 0.026123 MiB name: Session_Pool 00:04:55.757 end mempools------- 00:04:55.757 6 memzones totaling size 4.142822 MiB 00:04:55.757 size: 1.000366 MiB name: RG_ring_0_2171123 00:04:55.757 size: 1.000366 MiB name: RG_ring_1_2171123 00:04:55.757 size: 1.000366 MiB name: RG_ring_4_2171123 00:04:55.757 size: 1.000366 MiB name: RG_ring_5_2171123 00:04:55.757 size: 0.125366 MiB name: RG_ring_2_2171123 00:04:55.757 size: 0.015991 MiB name: RG_ring_3_2171123 00:04:55.757 end memzones------- 00:04:55.757 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.757 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:55.757 list of free elements. 
size: 10.852478 MiB 00:04:55.757 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:55.757 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:55.757 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:55.757 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:55.757 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:55.757 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:55.757 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:55.757 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:55.757 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:55.757 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:55.757 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:55.757 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:55.757 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:55.757 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:55.757 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:55.757 list of standard malloc elements. size: 199.218628 MiB 00:04:55.757 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:55.757 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:55.757 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:55.757 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:55.757 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:55.757 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:55.757 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:55.757 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:55.757 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:55.757 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:55.757 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:55.757 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:55.757 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:55.757 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:55.757 list of memzone associated elements. size: 607.928894 MiB 00:04:55.757 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:55.757 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.757 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:55.757 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.757 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:55.757 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2171123_0 00:04:55.757 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:55.757 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2171123_0 00:04:55.757 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:55.757 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2171123_0 00:04:55.758 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:55.758 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.758 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:55.758 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.758 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:55.758 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2171123_0 00:04:55.758 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:55.758 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2171123 00:04:55.758 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:55.758 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2171123 00:04:55.758 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:55.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.758 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:55.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.758 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:55.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.758 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:55.758 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:55.758 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:55.758 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2171123 00:04:55.758 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:55.758 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2171123 00:04:55.758 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:55.758 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2171123 00:04:55.758 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:55.758 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2171123 00:04:55.758 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:55.758 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2171123 00:04:55.758 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:55.758 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2171123 00:04:55.758 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:55.758 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:55.758 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:55.758 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:55.758 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:55.758 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:55.758 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:55.758 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2171123 00:04:55.758 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:55.758 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2171123 00:04:55.758 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:55.758 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:55.758 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:55.758 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:55.758 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:55.758 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2171123 00:04:55.758 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:55.758 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:55.758 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:55.758 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2171123 00:04:55.758 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:55.758 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2171123 00:04:55.758 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:55.758 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2171123 00:04:55.758 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:55.758 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:55.758 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:55.758 17:53:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2171123 00:04:55.758 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2171123 ']' 00:04:55.758 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2171123 00:04:55.758 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:55.758 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.758 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171123 00:04:56.017 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.017 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.017 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171123' 00:04:56.017 killing process with pid 2171123 00:04:56.017 17:53:03 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2171123 00:04:56.017 17:53:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2171123 00:04:56.277 00:04:56.277 real 0m1.059s 00:04:56.277 user 0m0.986s 00:04:56.277 sys 0m0.447s 00:04:56.277 17:53:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.277 17:53:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 ************************************ 00:04:56.277 END TEST dpdk_mem_utility 00:04:56.277 ************************************ 00:04:56.277 17:53:04 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:56.277 17:53:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.277 17:53:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.277 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:04:56.277 ************************************ 00:04:56.277 START TEST event 00:04:56.277 ************************************ 00:04:56.277 17:53:04 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:56.277 * Looking for test storage... 00:04:56.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.537 17:53:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.537 17:53:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.537 17:53:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.537 17:53:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.537 17:53:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.537 17:53:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.537 17:53:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.537 17:53:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.537 17:53:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.537 17:53:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.537 17:53:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.537 17:53:04 event -- scripts/common.sh@344 -- # case "$op" in 00:04:56.537 17:53:04 event -- scripts/common.sh@345 -- # : 1 00:04:56.537 17:53:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.537 17:53:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.537 17:53:04 event -- scripts/common.sh@365 -- # decimal 1 00:04:56.537 17:53:04 event -- scripts/common.sh@353 -- # local d=1 00:04:56.537 17:53:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.537 17:53:04 event -- scripts/common.sh@355 -- # echo 1 00:04:56.537 17:53:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.537 17:53:04 event -- scripts/common.sh@366 -- # decimal 2 00:04:56.537 17:53:04 event -- scripts/common.sh@353 -- # local d=2 00:04:56.537 17:53:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.537 17:53:04 event -- scripts/common.sh@355 -- # echo 2 00:04:56.537 17:53:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.537 17:53:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.537 17:53:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.537 17:53:04 event -- scripts/common.sh@368 -- # return 0 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.537 --rc genhtml_branch_coverage=1 00:04:56.537 --rc genhtml_function_coverage=1 00:04:56.537 --rc genhtml_legend=1 00:04:56.537 --rc geninfo_all_blocks=1 00:04:56.537 --rc geninfo_unexecuted_blocks=1 00:04:56.537 00:04:56.537 ' 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.537 --rc genhtml_branch_coverage=1 00:04:56.537 --rc genhtml_function_coverage=1 00:04:56.537 --rc genhtml_legend=1 00:04:56.537 --rc geninfo_all_blocks=1 00:04:56.537 --rc geninfo_unexecuted_blocks=1 00:04:56.537 00:04:56.537 ' 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.537 --rc genhtml_branch_coverage=1 00:04:56.537 --rc genhtml_function_coverage=1 00:04:56.537 --rc genhtml_legend=1 00:04:56.537 --rc geninfo_all_blocks=1 00:04:56.537 --rc geninfo_unexecuted_blocks=1 00:04:56.537 00:04:56.537 ' 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.537 --rc genhtml_branch_coverage=1 00:04:56.537 --rc genhtml_function_coverage=1 00:04:56.537 --rc genhtml_legend=1 00:04:56.537 --rc geninfo_all_blocks=1 00:04:56.537 --rc geninfo_unexecuted_blocks=1 00:04:56.537 00:04:56.537 ' 00:04:56.537 17:53:04 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:56.537 17:53:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:56.537 17:53:04 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:56.537 17:53:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.537 17:53:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.537 ************************************ 00:04:56.537 START TEST event_perf 00:04:56.537 ************************************ 00:04:56.537 17:53:04 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:04:56.537 Running I/O for 1 seconds...[2024-12-09 17:53:04.417798] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:04:56.537 [2024-12-09 17:53:04.417860] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171447 ] 00:04:56.537 [2024-12-09 17:53:04.510687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.797 [2024-12-09 17:53:04.555426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.797 [2024-12-09 17:53:04.555537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.797 [2024-12-09 17:53:04.555648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.797 [2024-12-09 17:53:04.555649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.731 Running I/O for 1 seconds... 00:04:57.731 lcore 0: 209898 00:04:57.731 lcore 1: 209897 00:04:57.731 lcore 2: 209898 00:04:57.731 lcore 3: 209897 00:04:57.731 done. 00:04:57.731 00:04:57.731 real 0m1.200s 00:04:57.731 user 0m4.098s 00:04:57.731 sys 0m0.099s 00:04:57.731 17:53:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.731 17:53:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.731 ************************************ 00:04:57.731 END TEST event_perf 00:04:57.731 ************************************ 00:04:57.731 17:53:05 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:57.731 17:53:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:57.731 17:53:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.731 17:53:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.731 ************************************ 00:04:57.731 START TEST event_reactor 00:04:57.731 ************************************ 00:04:57.731 17:53:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:57.731 [2024-12-09 17:53:05.705909] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:57.731 [2024-12-09 17:53:05.705997] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171618 ] 00:04:57.990 [2024-12-09 17:53:05.802851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.990 [2024-12-09 17:53:05.843163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.928 test_start 00:04:58.928 oneshot 00:04:58.928 tick 100 00:04:58.928 tick 100 00:04:58.928 tick 250 00:04:58.928 tick 100 00:04:58.928 tick 100 00:04:58.928 tick 100 00:04:58.928 tick 250 00:04:58.928 tick 500 00:04:58.928 tick 100 00:04:58.928 tick 100 00:04:58.928 tick 250 00:04:58.928 tick 100 00:04:58.928 tick 100 00:04:58.928 test_end 00:04:58.928 00:04:58.928 real 0m1.200s 00:04:58.928 user 0m1.099s 00:04:58.928 sys 0m0.097s 00:04:58.928 17:53:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.928 17:53:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:58.928 ************************************ 00:04:58.928 END TEST event_reactor 00:04:58.928 ************************************ 00:04:59.187 17:53:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.187 17:53:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:59.187 17:53:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.187 17:53:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.187 ************************************ 00:04:59.187 START TEST event_reactor_perf 00:04:59.187 ************************************ 00:04:59.187 17:53:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.187 [2024-12-09 17:53:06.990193] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:04:59.187 [2024-12-09 17:53:06.990271] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171779 ] 00:04:59.187 [2024-12-09 17:53:07.087926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.187 [2024-12-09 17:53:07.126198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.567 test_start 00:05:00.567 test_end 00:05:00.567 Performance: 533548 events per second 00:05:00.567 00:05:00.567 real 0m1.198s 00:05:00.567 user 0m1.097s 00:05:00.567 sys 0m0.096s 00:05:00.567 17:53:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.567 17:53:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.567 ************************************ 00:05:00.567 END TEST event_reactor_perf 00:05:00.567 ************************************ 00:05:00.567 17:53:08 event -- event/event.sh@49 -- # uname -s 00:05:00.567 17:53:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.568 17:53:08 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.568 17:53:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.568 17:53:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.568 17:53:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.568 ************************************ 00:05:00.568 START TEST event_scheduler 00:05:00.568 ************************************ 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.568 * Looking for test storage... 
00:05:00.568 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.568 17:53:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.568 --rc genhtml_branch_coverage=1 00:05:00.568 --rc genhtml_function_coverage=1 00:05:00.568 --rc genhtml_legend=1 00:05:00.568 --rc geninfo_all_blocks=1 00:05:00.568 --rc geninfo_unexecuted_blocks=1 00:05:00.568 00:05:00.568 ' 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.568 --rc genhtml_branch_coverage=1 00:05:00.568 --rc genhtml_function_coverage=1 00:05:00.568 --rc genhtml_legend=1 00:05:00.568 --rc geninfo_all_blocks=1 00:05:00.568 --rc geninfo_unexecuted_blocks=1 00:05:00.568 00:05:00.568 ' 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.568 --rc genhtml_branch_coverage=1 00:05:00.568 --rc genhtml_function_coverage=1 00:05:00.568 --rc genhtml_legend=1 00:05:00.568 --rc geninfo_all_blocks=1 00:05:00.568 --rc geninfo_unexecuted_blocks=1 00:05:00.568 00:05:00.568 ' 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.568 --rc genhtml_branch_coverage=1 00:05:00.568 --rc genhtml_function_coverage=1 00:05:00.568 --rc genhtml_legend=1 00:05:00.568 --rc geninfo_all_blocks=1 00:05:00.568 --rc geninfo_unexecuted_blocks=1 00:05:00.568 00:05:00.568 ' 00:05:00.568 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.568 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.568 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2172099 00:05:00.568 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.568 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2172099 
00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2172099 ']' 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.568 17:53:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.568 [2024-12-09 17:53:08.485480] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:00.568 [2024-12-09 17:53:08.485533] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172099 ] 00:05:00.827 [2024-12-09 17:53:08.576431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.827 [2024-12-09 17:53:08.618887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.827 [2024-12-09 17:53:08.618999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.827 [2024-12-09 17:53:08.619038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.827 [2024-12-09 17:53:08.619039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.827 17:53:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.827 17:53:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:00.827 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:00.827 17:53:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.827 17:53:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.827 [2024-12-09 17:53:08.659752] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:00.827 [2024-12-09 17:53:08.659771] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:00.827 [2024-12-09 17:53:08.659782] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:00.827 [2024-12-09 17:53:08.659790] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:00.827 [2024-12-09 17:53:08.659797] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:00.827 17:53:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.828 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:00.828 17:53:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.828 17:53:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.828 [2024-12-09 17:53:08.739074] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:00.828 17:53:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.828 17:53:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:00.828 17:53:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.828 17:53:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.828 17:53:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.828 ************************************ 00:05:00.828 START TEST scheduler_create_thread 00:05:00.828 ************************************ 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.828 2 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.828 3 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.828 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 4 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 5 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 6 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 7 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 8 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 9 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 10 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.087 17:53:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.464 17:53:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.464 17:53:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:02.464 17:53:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:02.464 17:53:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.464 17:53:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.842 17:53:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.842 00:05:03.842 real 0m2.619s 00:05:03.842 user 0m0.025s 00:05:03.842 sys 0m0.007s 00:05:03.842 17:53:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.842 17:53:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.842 ************************************ 00:05:03.842 END TEST scheduler_create_thread 00:05:03.842 ************************************ 00:05:03.842 17:53:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.842 17:53:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2172099 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2172099 ']' 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2172099 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2172099 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2172099' 00:05:03.842 killing process with pid 2172099 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2172099 00:05:03.842 17:53:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2172099 00:05:04.101 [2024-12-09 17:53:11.877474] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
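Everything in the scheduler_create_thread test above goes through rpc_cmd with an rpc.py plugin: busy threads pinned to each of the four cores (active_pinned, -a 100), idle pinned threads (-a 0), unpinned one_third_active/half_active threads, one thread retargeted with scheduler_thread_set_active, and one created and immediately removed with scheduler_thread_delete. The scheduler_thread_* methods are registered by the test app's plugin, not by core SPDK, so --plugin scheduler_plugin must be resolvable on PYTHONPATH. A sketch of the call shapes seen in the trace; the thread IDs 11 and 12 are simply what the create calls returned at runtime:

# Sketch of the plugin-backed RPCs exercised above.
rpc="./scripts/rpc.py --plugin scheduler_plugin"

$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy, pinned to core 0
$rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle, pinned to core 0
$rpc scheduler_thread_create -n half_active   -a 0            # unpinned, created idle
$rpc scheduler_thread_set_active 11 50    # drive an existing thread to 50% active
$rpc scheduler_thread_delete 12           # remove a thread again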
00:05:04.101 00:05:04.101 real 0m3.795s 00:05:04.101 user 0m5.635s 00:05:04.101 sys 0m0.434s 00:05:04.101 17:53:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.102 17:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.102 ************************************ 00:05:04.102 END TEST event_scheduler 00:05:04.102 ************************************ 00:05:04.361 17:53:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:04.361 17:53:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:04.361 17:53:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.361 17:53:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.361 17:53:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.361 ************************************ 00:05:04.361 START TEST app_repeat 00:05:04.361 ************************************ 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2172933 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2172933' 00:05:04.361 Process app_repeat pid: 2172933 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:04.361 spdk_app_start Round 0 00:05:04.361 17:53:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2172933 /var/tmp/spdk-nbd.sock 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2172933 ']' 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.361 17:53:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.361 [2024-12-09 17:53:12.161654] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
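app_repeat runs one SPDK app per round on a private RPC socket (-r /var/tmp/spdk-nbd.sock) so it never collides with the default /var/tmp/spdk.sock, with a two-core mask (-m 0x3) and a 4-second timer (-t 4). A simplified sketch of that launch-and-wait pattern; the polling loop is a stand-in for the sturdier waitforlisten helper in autotest_common.sh, and paths assume an SPDK checkout:

# Sketch: start an SPDK app on a private socket, poll until it answers.
sock=/var/tmp/spdk-nbd.sock
./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
pid=$!
trap 'kill -9 $pid' SIGINT SIGTERM EXIT

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds once the app is listening on $sock
    ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done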
00:05:04.361 [2024-12-09 17:53:12.161720] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172933 ] 00:05:04.361 [2024-12-09 17:53:12.253949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.361 [2024-12-09 17:53:12.295411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.361 [2024-12-09 17:53:12.295412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.620 17:53:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.620 17:53:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.620 17:53:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.620 Malloc0 00:05:04.620 17:53:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.879 Malloc1 00:05:04.879 17:53:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.879 17:53:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.879 17:53:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.879 17:53:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.880 17:53:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.139 /dev/nbd0 00:05:05.139 17:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.139 17:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
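Each round builds the same data path: two RAM-backed bdevs from bdev_malloc_create 64 4096 (64 MiB, 4 KiB blocks, auto-named Malloc0 and Malloc1), each then exported as a kernel block device with nbd_start_disk; the grep against /proc/partitions that follows is waitfornbd confirming the kernel actually registered the device. A sketch of that pairing, socket path as in the trace and the nbd kernel module assumed loaded:

# Sketch: create RAM bdevs and expose them as /dev/nbd* block devices.
rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create 64 4096    # auto-named Malloc0: 64 MiB, 4 KiB blocks
$rpc bdev_malloc_create 64 4096    # auto-named Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0    # kernel NBD now backed by Malloc0
$rpc nbd_start_disk Malloc1 /dev/nbd1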
00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.139 1+0 records in 00:05:05.139 1+0 records out 00:05:05.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265197 s, 15.4 MB/s 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:05.139 17:53:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:05.139 17:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.139 17:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.139 17:53:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.398 /dev/nbd1 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.398 1+0 records in 00:05:05.398 1+0 records out 00:05:05.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253446 s, 16.2 MB/s 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:05.398 17:53:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.398 17:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.657 { 00:05:05.657 "nbd_device": "/dev/nbd0", 00:05:05.657 "bdev_name": "Malloc0" 00:05:05.657 }, 00:05:05.657 { 00:05:05.657 "nbd_device": "/dev/nbd1", 00:05:05.657 "bdev_name": "Malloc1" 00:05:05.657 } 00:05:05.657 ]' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.657 { 00:05:05.657 "nbd_device": "/dev/nbd0", 00:05:05.657 "bdev_name": "Malloc0" 00:05:05.657 }, 00:05:05.657 { 00:05:05.657 "nbd_device": "/dev/nbd1", 00:05:05.657 "bdev_name": "Malloc1" 00:05:05.657 } 00:05:05.657 ]' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.657 /dev/nbd1' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.657 /dev/nbd1' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.657 256+0 records in 00:05:05.657 256+0 records out 00:05:05.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454839 s, 231 MB/s 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.657 256+0 records in 00:05:05.657 256+0 records out 00:05:05.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192316 s, 54.5 MB/s 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.657 17:53:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.657 256+0 records in 00:05:05.658 256+0 records out 00:05:05.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204136 s, 51.4 MB/s 00:05:05.658 17:53:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:05.658 17:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.916 17:53:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.917 17:53:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.176 17:53:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.435 17:53:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.435 17:53:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.694 17:53:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.694 [2024-12-09 17:53:14.666003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.953 [2024-12-09 17:53:14.702043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.953 [2024-12-09 17:53:14.702044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.953 [2024-12-09 17:53:14.743096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.953 [2024-12-09 17:53:14.743136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.596 17:53:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.596 17:53:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.596 spdk_app_start Round 1 00:05:09.596 17:53:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2172933 /var/tmp/spdk-nbd.sock 00:05:09.596 17:53:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2172933 ']' 00:05:09.596 17:53:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.596 17:53:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.596 17:53:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
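The round that just finished shows the whole verify cycle: 1 MiB of /dev/urandom goes into a scratch file, the file is copied onto each NBD device with O_DIRECT, cmp -b -n 1M reads it back through the kernel, then both disks are stopped, nbd_get_disks is checked for an empty list, and spdk_kill_instance SIGTERM shuts the app down before the next round. A condensed sketch of the write/verify pass, scratch path shortened from the workspace path in the trace:

# Sketch of the per-round write/verify pass.
tmp=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write through O_DIRECT
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"    # byte-compare the first 1 MiB back
done
rm "$tmp"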
00:05:09.596 17:53:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.596 17:53:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.854 17:53:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.854 17:53:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.854 17:53:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.113 Malloc0 00:05:10.113 17:53:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.372 Malloc1 00:05:10.372 17:53:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.372 17:53:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.631 /dev/nbd0 00:05:10.631 17:53:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.631 17:53:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:10.631 1+0 records in 00:05:10.631 1+0 records out 00:05:10.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277772 s, 14.7 MB/s 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.631 17:53:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.631 17:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.631 17:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.631 17:53:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.890 /dev/nbd1 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.890 1+0 records in 00:05:10.890 1+0 records out 00:05:10.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130007 s, 31.5 MB/s 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.890 17:53:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.890 17:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.890 { 00:05:10.890 
"nbd_device": "/dev/nbd0", 00:05:10.890 "bdev_name": "Malloc0" 00:05:10.890 }, 00:05:10.890 { 00:05:10.890 "nbd_device": "/dev/nbd1", 00:05:10.890 "bdev_name": "Malloc1" 00:05:10.890 } 00:05:10.890 ]' 00:05:11.149 17:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.149 { 00:05:11.149 "nbd_device": "/dev/nbd0", 00:05:11.149 "bdev_name": "Malloc0" 00:05:11.149 }, 00:05:11.149 { 00:05:11.149 "nbd_device": "/dev/nbd1", 00:05:11.149 "bdev_name": "Malloc1" 00:05:11.149 } 00:05:11.149 ]' 00:05:11.149 17:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.150 /dev/nbd1' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.150 /dev/nbd1' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.150 256+0 records in 00:05:11.150 256+0 records out 00:05:11.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107527 s, 97.5 MB/s 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.150 256+0 records in 00:05:11.150 256+0 records out 00:05:11.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193022 s, 54.3 MB/s 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.150 256+0 records in 00:05:11.150 256+0 records out 00:05:11.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207187 s, 50.6 MB/s 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.150 17:53:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.150 17:53:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.410 17:53:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.669 17:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.928 17:53:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.928 17:53:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.187 17:53:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.187 [2024-12-09 17:53:20.056543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.187 [2024-12-09 17:53:20.102242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.187 [2024-12-09 17:53:20.102242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.187 [2024-12-09 17:53:20.144231] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.187 [2024-12-09 17:53:20.144273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.490 17:53:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.490 17:53:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:15.490 spdk_app_start Round 2 00:05:15.490 17:53:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2172933 /var/tmp/spdk-nbd.sock 00:05:15.490 17:53:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2172933 ']' 00:05:15.490 17:53:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.490 17:53:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.490 17:53:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
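Rounds 1 and 2 repeat the round-0 pattern verbatim; the one step not sketched above is the teardown check, which pipes nbd_get_disks through jq to count devices still exported and expects zero once both disks are stopped (grep -c exits non-zero on a zero count, hence the true fallback visible in the trace). A sketch of that teardown, jq filter copied from the trace:

# Sketch: stop both exports, assert nothing is still attached, shut down.
rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || exit 1
$rpc spdk_kill_instance SIGTERM    # graceful shutdown between rounds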
00:05:15.490 17:53:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.490 17:53:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.490 17:53:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.490 17:53:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.490 17:53:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.490 Malloc0 00:05:15.490 17:53:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.749 Malloc1 00:05:15.749 17:53:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.749 17:53:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.750 17:53:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.750 17:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.750 17:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.750 17:53:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.750 /dev/nbd0 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:16.009 1+0 records in 00:05:16.009 1+0 records out 00:05:16.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224504 s, 18.2 MB/s 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.009 /dev/nbd1 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.009 17:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.009 17:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.268 1+0 records in 00:05:16.268 1+0 records out 00:05:16.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260744 s, 15.7 MB/s 00:05:16.268 17:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.268 17:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.268 17:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.268 17:53:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.268 17:53:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.268 { 00:05:16.268 
"nbd_device": "/dev/nbd0", 00:05:16.268 "bdev_name": "Malloc0" 00:05:16.268 }, 00:05:16.268 { 00:05:16.268 "nbd_device": "/dev/nbd1", 00:05:16.268 "bdev_name": "Malloc1" 00:05:16.268 } 00:05:16.268 ]' 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.268 { 00:05:16.268 "nbd_device": "/dev/nbd0", 00:05:16.268 "bdev_name": "Malloc0" 00:05:16.268 }, 00:05:16.268 { 00:05:16.268 "nbd_device": "/dev/nbd1", 00:05:16.268 "bdev_name": "Malloc1" 00:05:16.268 } 00:05:16.268 ]' 00:05:16.268 17:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.527 /dev/nbd1' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.527 /dev/nbd1' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.527 256+0 records in 00:05:16.527 256+0 records out 00:05:16.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116806 s, 89.8 MB/s 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.527 256+0 records in 00:05:16.527 256+0 records out 00:05:16.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195924 s, 53.5 MB/s 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.527 256+0 records in 00:05:16.527 256+0 records out 00:05:16.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205033 s, 51.1 MB/s 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.527 17:53:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.528 17:53:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.787 17:53:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.046 17:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.310 17:53:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.310 17:53:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.310 17:53:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.571 [2024-12-09 17:53:25.394485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.571 [2024-12-09 17:53:25.430006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.571 [2024-12-09 17:53:25.430007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.571 [2024-12-09 17:53:25.470876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.571 [2024-12-09 17:53:25.470918] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.860 17:53:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2172933 /var/tmp/spdk-nbd.sock 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2172933 ']' 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
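The teardown just logged stops both NBD devices and then confirms that nbd_get_disks returns an empty list: the JSON reply is piped through jq to pull out each nbd_device field, and grep -c counts how many /dev/nbd entries remain (zero after a clean stop). A minimal sketch of that counting idiom, assuming an SPDK checkout with scripts/rpc.py and jq available (the wrapper name count_nbd_disks is hypothetical):

  # Count NBD devices currently exported over the given RPC socket.
  count_nbd_disks() {
      local sock=$1 names
      names=$(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
      # grep -c prints 0 but exits non-zero when nothing matches; tolerate
      # that, mirroring the 'true' fallback visible in the trace above.
      echo "$names" | grep -c /dev/nbd || true
  }

  count_nbd_disks /var/tmp/spdk-nbd.sock   # 2 while nbd0/nbd1 are attached, 0 after teardown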
00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.860 17:53:28 event.app_repeat -- event/event.sh@39 -- # killprocess 2172933 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2172933 ']' 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2172933 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.860 17:53:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2172933 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2172933' 00:05:20.861 killing process with pid 2172933 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2172933 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2172933 00:05:20.861 spdk_app_start is called in Round 0. 00:05:20.861 Shutdown signal received, stop current app iteration 00:05:20.861 Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 reinitialization... 00:05:20.861 spdk_app_start is called in Round 1. 00:05:20.861 Shutdown signal received, stop current app iteration 00:05:20.861 Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 reinitialization... 00:05:20.861 spdk_app_start is called in Round 2. 00:05:20.861 Shutdown signal received, stop current app iteration 00:05:20.861 Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 reinitialization... 00:05:20.861 spdk_app_start is called in Round 3. 
00:05:20.861 Shutdown signal received, stop current app iteration 00:05:20.861 17:53:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:20.861 17:53:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:20.861 00:05:20.861 real 0m16.527s 00:05:20.861 user 0m35.891s 00:05:20.861 sys 0m3.036s 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.861 17:53:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.861 ************************************ 00:05:20.861 END TEST app_repeat 00:05:20.861 ************************************ 00:05:20.861 17:53:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:20.861 17:53:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:20.861 17:53:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.861 17:53:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.861 17:53:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.861 ************************************ 00:05:20.861 START TEST cpu_locks 00:05:20.861 ************************************ 00:05:20.861 17:53:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:20.861 * Looking for test storage... 00:05:21.120 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:21.120 17:53:28 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.121 17:53:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.121 --rc genhtml_branch_coverage=1 00:05:21.121 --rc genhtml_function_coverage=1 00:05:21.121 --rc genhtml_legend=1 00:05:21.121 --rc geninfo_all_blocks=1 00:05:21.121 --rc geninfo_unexecuted_blocks=1 00:05:21.121 00:05:21.121 ' 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.121 --rc genhtml_branch_coverage=1 00:05:21.121 --rc genhtml_function_coverage=1 00:05:21.121 --rc genhtml_legend=1 00:05:21.121 --rc geninfo_all_blocks=1 00:05:21.121 --rc geninfo_unexecuted_blocks=1 00:05:21.121 00:05:21.121 ' 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.121 --rc genhtml_branch_coverage=1 00:05:21.121 --rc genhtml_function_coverage=1 00:05:21.121 --rc genhtml_legend=1 00:05:21.121 --rc geninfo_all_blocks=1 00:05:21.121 --rc geninfo_unexecuted_blocks=1 00:05:21.121 00:05:21.121 ' 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.121 --rc genhtml_branch_coverage=1 00:05:21.121 --rc genhtml_function_coverage=1 00:05:21.121 --rc genhtml_legend=1 00:05:21.121 --rc geninfo_all_blocks=1 00:05:21.121 --rc geninfo_unexecuted_blocks=1 00:05:21.121 00:05:21.121 ' 00:05:21.121 17:53:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:21.121 17:53:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:21.121 17:53:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:21.121 17:53:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.121 17:53:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.121 ************************************ 
00:05:21.121 START TEST default_locks 00:05:21.121 ************************************ 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2176033 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2176033 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2176033 ']' 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.121 17:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.121 [2024-12-09 17:53:29.028864] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:21.121 [2024-12-09 17:53:29.028909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176033 ] 00:05:21.381 [2024-12-09 17:53:29.099040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.381 [2024-12-09 17:53:29.139126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.381 17:53:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.381 17:53:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:21.381 17:53:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2176033 00:05:21.381 17:53:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2176033 00:05:21.381 17:53:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.319 lslocks: write error 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2176033 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2176033 ']' 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2176033 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176033 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2176033' 00:05:22.319 killing process with pid 2176033 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2176033 00:05:22.319 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2176033 00:05:22.578 17:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2176033 00:05:22.578 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2176033 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2176033 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2176033 ']' 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
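The NOT waitforlisten 2176033 sequence that continues below is expected to fail: the target was killed a moment earlier, so the probe reports "No such process", waitforlisten returns 1, and NOT inverts that into a passing step. A stripped-down sketch of that negation idiom in plain bash (the real helper in autotest_common.sh also validates its argument and inspects the exit status more carefully, as the trace shows; not_sketch is a hypothetical name):

  # Succeed only if the wrapped command fails, as the NOT helper does.
  not_sketch() {
      local es=0
      "$@" || es=$?
      # For NOT, a non-zero exit from the wrapped command is the pass case.
      (( es != 0 ))
  }

  not_sketch kill -0 2176033   # passes once pid 2176033 is gone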
00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.579 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2176033) - No such process 00:05:22.579 ERROR: process (pid: 2176033) is no longer running 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.579 00:05:22.579 real 0m1.443s 00:05:22.579 user 0m1.432s 00:05:22.579 sys 0m0.708s 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.579 17:53:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.579 ************************************ 00:05:22.579 END TEST default_locks 00:05:22.579 ************************************ 00:05:22.579 17:53:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.579 17:53:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.579 17:53:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.579 17:53:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.579 ************************************ 00:05:22.579 START TEST default_locks_via_rpc 00:05:22.579 ************************************ 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2176313 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2176313 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2176313 ']' 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
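The cpu_locks tests above and below decide whether the CPU core lock is still held by pointing lslocks at the target pid and grepping for spdk_cpu_lock; the stray "lslocks: write error" lines are almost certainly lslocks hitting a closed pipe after grep -q has matched and exited, not a test failure. A sketch of that check, assuming the target takes a file lock named /var/tmp/spdk_cpu_lock_* for each core it claims:

  # Return success if the given pid holds an SPDK CPU core lock file.
  locks_exist_sketch() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist_sketch 2176313 && echo "core lock held by pid 2176313"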
00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.579 17:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.838 [2024-12-09 17:53:30.561257] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:22.838 [2024-12-09 17:53:30.561308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176313 ] 00:05:22.838 [2024-12-09 17:53:30.652486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.838 [2024-12-09 17:53:30.692927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2176313 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2176313 00:05:23.776 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2176313 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2176313 ']' 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2176313 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176313 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.035 
17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176313' 00:05:24.035 killing process with pid 2176313 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2176313 00:05:24.035 17:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2176313 00:05:24.295 00:05:24.295 real 0m1.613s 00:05:24.295 user 0m1.711s 00:05:24.295 sys 0m0.573s 00:05:24.295 17:53:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.295 17:53:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.295 ************************************ 00:05:24.295 END TEST default_locks_via_rpc 00:05:24.295 ************************************ 00:05:24.295 17:53:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:24.295 17:53:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.295 17:53:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.295 17:53:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.295 ************************************ 00:05:24.295 START TEST non_locking_app_on_locked_coremask 00:05:24.295 ************************************ 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2176690 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2176690 /var/tmp/spdk.sock 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2176690 ']' 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.295 17:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.295 [2024-12-09 17:53:32.246901] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:24.295 [2024-12-09 17:53:32.246945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176690 ] 00:05:24.554 [2024-12-09 17:53:32.338213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.554 [2024-12-09 17:53:32.379815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2176711 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2176711 /var/tmp/spdk2.sock 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2176711 ']' 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.123 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.383 [2024-12-09 17:53:33.123208] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:25.383 [2024-12-09 17:53:33.123262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176711 ] 00:05:25.383 [2024-12-09 17:53:33.232596] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
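This launch is the point of non_locking_app_on_locked_coremask: a second spdk_tgt is started on the same core mask (0x1) as the first, but with --disable-cpumask-locks, so instead of failing to claim core 0 it prints the "CPU core locks deactivated." notice just logged and comes up on its own RPC socket. A sketch of that two-instance launch, assuming an SPDK build tree laid out as in this job (readiness waiting omitted):

  # First instance claims core 0 and holds /var/tmp/spdk_cpu_lock_000.
  build/bin/spdk_tgt -m 0x1 &
  # Second instance shares core 0 but skips core-lock acquisition and
  # listens on a separate RPC socket so the two targets do not collide.
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &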
00:05:25.383 [2024-12-09 17:53:33.232624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.383 [2024-12-09 17:53:33.312434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.320 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.320 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.320 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2176690 00:05:26.321 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2176690 00:05:26.321 17:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.889 lslocks: write error 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2176690 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2176690 ']' 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2176690 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176690 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176690' 00:05:26.889 killing process with pid 2176690 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2176690 00:05:26.889 17:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2176690 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2176711 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2176711 ']' 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2176711 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176711 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176711' 00:05:27.458 
killing process with pid 2176711 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2176711 00:05:27.458 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2176711 00:05:27.717 00:05:27.717 real 0m3.456s 00:05:27.717 user 0m3.735s 00:05:27.717 sys 0m1.100s 00:05:27.717 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.717 17:53:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.717 ************************************ 00:05:27.717 END TEST non_locking_app_on_locked_coremask 00:05:27.717 ************************************ 00:05:27.717 17:53:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:27.717 17:53:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.717 17:53:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.717 17:53:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.976 ************************************ 00:05:27.976 START TEST locking_app_on_unlocked_coremask 00:05:27.976 ************************************ 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2177282 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2177282 /var/tmp/spdk.sock 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2177282 ']' 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.976 17:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.976 [2024-12-09 17:53:35.786916] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:27.976 [2024-12-09 17:53:35.786971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177282 ] 00:05:27.976 [2024-12-09 17:53:35.873773] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:27.976 [2024-12-09 17:53:35.873801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.976 [2024-12-09 17:53:35.914684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2177409 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2177409 /var/tmp/spdk2.sock 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2177409 ']' 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.914 17:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.914 [2024-12-09 17:53:36.639706] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:28.914 [2024-12-09 17:53:36.639757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177409 ] 00:05:28.914 [2024-12-09 17:53:36.747051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.914 [2024-12-09 17:53:36.832760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.852 17:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.852 17:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.852 17:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2177409 00:05:29.852 17:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2177409 00:05:29.852 17:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.228 lslocks: write error 00:05:31.228 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2177282 00:05:31.228 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2177282 ']' 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2177282 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177282 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177282' 00:05:31.229 killing process with pid 2177282 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2177282 00:05:31.229 17:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2177282 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2177409 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2177409 ']' 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2177409 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177409 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.797 17:53:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177409' 00:05:31.797 killing process with pid 2177409 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2177409 00:05:31.797 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2177409 00:05:32.055 00:05:32.055 real 0m4.111s 00:05:32.055 user 0m4.450s 00:05:32.055 sys 0m1.334s 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.055 ************************************ 00:05:32.055 END TEST locking_app_on_unlocked_coremask 00:05:32.055 ************************************ 00:05:32.055 17:53:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:32.055 17:53:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.055 17:53:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.055 17:53:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.055 ************************************ 00:05:32.055 START TEST locking_app_on_locked_coremask 00:05:32.055 ************************************ 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2178089 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2178089 /var/tmp/spdk.sock 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2178089 ']' 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.055 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.056 17:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.056 [2024-12-09 17:53:39.970715] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:32.056 [2024-12-09 17:53:39.970758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178089 ] 00:05:32.315 [2024-12-09 17:53:40.061470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.315 [2024-12-09 17:53:40.107772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2178129 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2178129 /var/tmp/spdk2.sock 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2178129 /var/tmp/spdk2.sock 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2178129 /var/tmp/spdk2.sock 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2178129 ']' 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.883 17:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.142 [2024-12-09 17:53:40.867427] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:33.142 [2024-12-09 17:53:40.867480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178129 ] 00:05:33.142 [2024-12-09 17:53:40.976513] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2178089 has claimed it. 00:05:33.142 [2024-12-09 17:53:40.976547] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.710 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2178129) - No such process 00:05:33.710 ERROR: process (pid: 2178129) is no longer running 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2178089 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2178089 00:05:33.710 17:53:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.279 lslocks: write error 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2178089 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2178089 ']' 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2178089 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178089 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178089' 00:05:34.279 killing process with pid 2178089 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2178089 00:05:34.279 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2178089 00:05:34.538 00:05:34.538 real 0m2.553s 00:05:34.538 user 0m2.810s 00:05:34.538 sys 0m0.850s 00:05:34.538 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:34.538 17:53:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.538 ************************************ 00:05:34.538 END TEST locking_app_on_locked_coremask 00:05:34.538 ************************************ 00:05:34.797 17:53:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:34.797 17:53:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.797 17:53:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.797 17:53:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.797 ************************************ 00:05:34.797 START TEST locking_overlapped_coremask 00:05:34.797 ************************************ 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2178443 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2178443 /var/tmp/spdk.sock 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2178443 ']' 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.797 17:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.797 [2024-12-09 17:53:42.617395] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:05:34.797 [2024-12-09 17:53:42.617439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178443 ] 00:05:34.797 [2024-12-09 17:53:42.703854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.797 [2024-12-09 17:53:42.745374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.797 [2024-12-09 17:53:42.745484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.797 [2024-12-09 17:53:42.745483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2178693 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2178693 /var/tmp/spdk2.sock 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2178693 /var/tmp/spdk2.sock 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2178693 /var/tmp/spdk2.sock 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2178693 ']' 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.735 17:53:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.735 [2024-12-09 17:53:43.494019] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
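The -m 0x7 mask passed above selects cores 0-2, matching the three reactor notices just printed; the second instance started next uses -m 0x1c, i.e. cores 2-4, so the two masks overlap on core 2. An illustrative bit-walk of that mapping (mask_to_cores is not part of the test suite, just a sketch):

  # Illustrative only: list the core indices a hex coremask selects.
  mask_to_cores() {
    local mask=$(( $1 )) core=0
    while (( mask )); do
      if (( mask & 1 )); then printf '%d ' "$core"; fi
      mask=$(( mask >> 1 ))
      core=$(( core + 1 ))
    done
    echo
  }
  mask_to_cores 0x7    # -> 0 1 2  (this target's three reactors)
  mask_to_cores 0x1c   # -> 2 3 4  (the contender; collides on core 2)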
00:05:35.735 [2024-12-09 17:53:43.494068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178693 ] 00:05:35.735 [2024-12-09 17:53:43.603119] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2178443 has claimed it. 00:05:35.735 [2024-12-09 17:53:43.603164] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.303 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2178693) - No such process 00:05:36.303 ERROR: process (pid: 2178693) is no longer running 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2178443 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2178443 ']' 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2178443 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178443 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178443' 00:05:36.303 killing process with pid 2178443 00:05:36.303 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2178443 00:05:36.303 17:53:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2178443 00:05:36.563 00:05:36.563 real 0m1.951s 00:05:36.563 user 0m5.613s 00:05:36.563 sys 0m0.443s 00:05:36.563 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.563 17:53:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.563 ************************************ 00:05:36.563 END TEST locking_overlapped_coremask 00:05:36.563 ************************************ 00:05:36.822 17:53:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:36.822 17:53:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.822 17:53:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.822 17:53:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.822 ************************************ 00:05:36.822 START TEST locking_overlapped_coremask_via_rpc 00:05:36.822 ************************************ 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2178983 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2178983 /var/tmp/spdk.sock 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2178983 ']' 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.822 17:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.822 [2024-12-09 17:53:44.653629] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:36.822 [2024-12-09 17:53:44.653677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178983 ] 00:05:36.822 [2024-12-09 17:53:44.745073] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.822 [2024-12-09 17:53:44.745098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.822 [2024-12-09 17:53:44.785119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.822 [2024-12-09 17:53:44.785232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.822 [2024-12-09 17:53:44.785232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2179009 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2179009 /var/tmp/spdk2.sock 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2179009 ']' 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.758 17:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.758 [2024-12-09 17:53:45.532018] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:37.758 [2024-12-09 17:53:45.532079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179009 ] 00:05:37.758 [2024-12-09 17:53:45.642665] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
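At this point both targets are running with their core locks deactivated, even though 0x7 and 0x1c overlap on core 2; that is exactly what --disable-cpumask-locks permits. The test now enables the locks over RPC on the first target, then expects the same RPC to fail on the second. In outline (a reconstruction, with paths abbreviated relative to the spdk checkout):

  # Outline of the scenario driven below; both instances start unlocked.
  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # pid 2178983
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # pid 2179009
  scripts/rpc.py framework_enable_cpumask_locks   # first target claims cores 0-2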
00:05:37.758 [2024-12-09 17:53:45.642700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.758 [2024-12-09 17:53:45.727945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.758 [2024-12-09 17:53:45.728064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.758 [2024-12-09 17:53:45.728066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.693 [2024-12-09 17:53:46.398021] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2178983 has claimed it. 
00:05:38.693 request: 00:05:38.693 { 00:05:38.693 "method": "framework_enable_cpumask_locks", 00:05:38.693 "req_id": 1 00:05:38.693 } 00:05:38.693 Got JSON-RPC error response 00:05:38.693 response: 00:05:38.693 { 00:05:38.693 "code": -32603, 00:05:38.693 "message": "Failed to claim CPU core: 2" 00:05:38.693 } 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2178983 /var/tmp/spdk.sock 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2178983 ']' 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2179009 /var/tmp/spdk2.sock 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2179009 ']' 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
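The -32603 response above is the expected failure: once the first target has claimed cores 0-2 via framework_enable_cpumask_locks, the second target cannot lock core 2. The teardown that follows then verifies that only the first target's lock files remain. Stripped of the xtrace noise, the failing call and the lock-file check reduce to:

  # Reproducing the failing call (socket path as in this run):
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> code -32603, "Failed to claim CPU core: 2"

  # Equivalent of the check_remaining_locks step below: the glob of lock
  # files actually present must equal the expected brace expansion.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'only cores 0-2 locked'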
00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.693 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.952 00:05:38.952 real 0m2.216s 00:05:38.952 user 0m0.952s 00:05:38.952 sys 0m0.196s 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.952 17:53:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.952 ************************************ 00:05:38.952 END TEST locking_overlapped_coremask_via_rpc 00:05:38.952 ************************************ 00:05:38.952 17:53:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.952 17:53:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2178983 ]] 00:05:38.952 17:53:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2178983 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2178983 ']' 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2178983 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178983 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178983' 00:05:38.952 killing process with pid 2178983 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2178983 00:05:38.952 17:53:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2178983 00:05:39.520 17:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2179009 ]] 00:05:39.520 17:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2179009 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2179009 ']' 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2179009 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179009 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179009' 00:05:39.520 killing process with pid 2179009 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2179009 00:05:39.520 17:53:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2179009 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2178983 ]] 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2178983 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2178983 ']' 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2178983 00:05:39.779 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2178983) - No such process 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2178983 is not found' 00:05:39.779 Process with pid 2178983 is not found 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2179009 ]] 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2179009 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2179009 ']' 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2179009 00:05:39.779 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2179009) - No such process 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2179009 is not found' 00:05:39.779 Process with pid 2179009 is not found 00:05:39.779 17:53:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:39.779 00:05:39.779 real 0m18.880s 00:05:39.779 user 0m32.045s 00:05:39.779 sys 0m6.333s 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.779 17:53:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.779 ************************************ 00:05:39.779 END TEST cpu_locks 00:05:39.779 ************************************ 00:05:39.779 00:05:39.779 real 0m43.505s 00:05:39.779 user 1m20.171s 00:05:39.779 sys 0m10.553s 00:05:39.779 17:53:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.779 17:53:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.779 ************************************ 00:05:39.779 END TEST event 00:05:39.779 ************************************ 00:05:39.779 17:53:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:39.779 17:53:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.779 17:53:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.779 17:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.779 ************************************ 00:05:39.779 START TEST thread 00:05:39.779 ************************************ 00:05:39.779 17:53:47 thread -- common/autotest_common.sh@1129 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:40.039 * Looking for test storage... 00:05:40.039 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.039 17:53:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.039 17:53:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.039 17:53:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.039 17:53:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.039 17:53:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.039 17:53:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.039 17:53:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.039 17:53:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.039 17:53:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.039 17:53:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.039 17:53:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.039 17:53:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:40.039 17:53:47 thread -- scripts/common.sh@345 -- # : 1 00:05:40.039 17:53:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.039 17:53:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.039 17:53:47 thread -- scripts/common.sh@365 -- # decimal 1 00:05:40.039 17:53:47 thread -- scripts/common.sh@353 -- # local d=1 00:05:40.039 17:53:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.039 17:53:47 thread -- scripts/common.sh@355 -- # echo 1 00:05:40.039 17:53:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.039 17:53:47 thread -- scripts/common.sh@366 -- # decimal 2 00:05:40.039 17:53:47 thread -- scripts/common.sh@353 -- # local d=2 00:05:40.039 17:53:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.039 17:53:47 thread -- scripts/common.sh@355 -- # echo 2 00:05:40.039 17:53:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.039 17:53:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.039 17:53:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.039 17:53:47 thread -- scripts/common.sh@368 -- # return 0 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.039 --rc genhtml_branch_coverage=1 00:05:40.039 --rc genhtml_function_coverage=1 00:05:40.039 --rc genhtml_legend=1 00:05:40.039 --rc geninfo_all_blocks=1 00:05:40.039 --rc geninfo_unexecuted_blocks=1 00:05:40.039 00:05:40.039 ' 00:05:40.039 17:53:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.039 --rc genhtml_branch_coverage=1 00:05:40.039 --rc genhtml_function_coverage=1 00:05:40.039 --rc genhtml_legend=1 00:05:40.039 --rc geninfo_all_blocks=1 00:05:40.039 --rc geninfo_unexecuted_blocks=1 00:05:40.039 00:05:40.039 ' 00:05:40.039 17:53:47 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.039 --rc genhtml_branch_coverage=1 00:05:40.039 --rc genhtml_function_coverage=1 00:05:40.039 --rc genhtml_legend=1 00:05:40.039 --rc geninfo_all_blocks=1 00:05:40.039 --rc geninfo_unexecuted_blocks=1 00:05:40.039 00:05:40.040 ' 00:05:40.040 17:53:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.040 --rc genhtml_branch_coverage=1 00:05:40.040 --rc genhtml_function_coverage=1 00:05:40.040 --rc genhtml_legend=1 00:05:40.040 --rc geninfo_all_blocks=1 00:05:40.040 --rc geninfo_unexecuted_blocks=1 00:05:40.040 00:05:40.040 ' 00:05:40.040 17:53:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.040 17:53:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:40.040 17:53:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.040 17:53:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.040 ************************************ 00:05:40.040 START TEST thread_poller_perf 00:05:40.040 ************************************ 00:05:40.040 17:53:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.040 [2024-12-09 17:53:47.999329] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:40.040 [2024-12-09 17:53:47.999399] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179637 ] 00:05:40.299 [2024-12-09 17:53:48.091644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.299 [2024-12-09 17:53:48.130040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.299 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:41.236 [2024-12-09T16:53:49.215Z] ====================================== 00:05:41.236 [2024-12-09T16:53:49.215Z] busy:2509379072 (cyc) 00:05:41.236 [2024-12-09T16:53:49.215Z] total_run_count: 436000 00:05:41.236 [2024-12-09T16:53:49.216Z] tsc_hz: 2500000000 (cyc) 00:05:41.237 [2024-12-09T16:53:49.216Z] ====================================== 00:05:41.237 [2024-12-09T16:53:49.216Z] poller_cost: 5755 (cyc), 2302 (nsec) 00:05:41.237 00:05:41.237 real 0m1.196s 00:05:41.237 user 0m1.093s 00:05:41.237 sys 0m0.098s 00:05:41.237 17:53:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.237 17:53:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.237 ************************************ 00:05:41.237 END TEST thread_poller_perf 00:05:41.237 ************************************ 00:05:41.504 17:53:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.504 17:53:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.504 17:53:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.504 17:53:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.504 ************************************ 00:05:41.504 START TEST thread_poller_perf 00:05:41.504 ************************************ 00:05:41.504 17:53:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.504 [2024-12-09 17:53:49.277071] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:41.504 [2024-12-09 17:53:49.277142] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179918 ] 00:05:41.504 [2024-12-09 17:53:49.369345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.504 [2024-12-09 17:53:49.407224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.504 Running 1000 pollers for 1 seconds with 0 microseconds period. 
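The summary block above boils down to one derived figure: poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds through tsc_hz. The counters of the 1 µs run just reported reproduce the printed 5755 cyc / 2302 nsec exactly, and the 0 µs run that follows works the same way:

  # Re-deriving poller_cost from the 1 us run's counters above.
  busy=2509379072 runs=436000 tsc_hz=2500000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { cyc = b / r; printf "%.0f cyc, %.0f nsec\n", cyc, cyc * 1e9 / hz }'
  # -> 5755 cyc, 2302 nsec, matching the report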
00:05:42.545 [2024-12-09T16:53:50.524Z] ====================================== 00:05:42.545 [2024-12-09T16:53:50.524Z] busy:2501970236 (cyc) 00:05:42.545 [2024-12-09T16:53:50.524Z] total_run_count: 5201000 00:05:42.545 [2024-12-09T16:53:50.524Z] tsc_hz: 2500000000 (cyc) 00:05:42.545 [2024-12-09T16:53:50.524Z] ====================================== 00:05:42.545 [2024-12-09T16:53:50.524Z] poller_cost: 481 (cyc), 192 (nsec) 00:05:42.545 00:05:42.545 real 0m1.193s 00:05:42.545 user 0m1.100s 00:05:42.545 sys 0m0.089s 00:05:42.545 17:53:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.545 17:53:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.545 ************************************ 00:05:42.545 END TEST thread_poller_perf 00:05:42.545 ************************************ 00:05:42.545 17:53:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:42.545 00:05:42.545 real 0m2.747s 00:05:42.545 user 0m2.344s 00:05:42.545 sys 0m0.424s 00:05:42.545 17:53:50 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.545 17:53:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.545 ************************************ 00:05:42.545 END TEST thread 00:05:42.545 ************************************ 00:05:42.804 17:53:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:42.804 17:53:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.804 17:53:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.804 17:53:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.804 17:53:50 -- common/autotest_common.sh@10 -- # set +x 00:05:42.804 ************************************ 00:05:42.804 START TEST app_cmdline 00:05:42.804 ************************************ 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.804 * Looking for test storage... 
00:05:42.804 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.804 17:53:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.804 --rc genhtml_branch_coverage=1 00:05:42.804 --rc genhtml_function_coverage=1 00:05:42.804 --rc genhtml_legend=1 00:05:42.804 --rc geninfo_all_blocks=1 00:05:42.804 --rc geninfo_unexecuted_blocks=1 00:05:42.804 00:05:42.804 ' 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.804 --rc genhtml_branch_coverage=1 00:05:42.804 --rc genhtml_function_coverage=1 00:05:42.804 --rc genhtml_legend=1 00:05:42.804 --rc geninfo_all_blocks=1 00:05:42.804 --rc geninfo_unexecuted_blocks=1 
00:05:42.804 00:05:42.804 ' 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.804 --rc genhtml_branch_coverage=1 00:05:42.804 --rc genhtml_function_coverage=1 00:05:42.804 --rc genhtml_legend=1 00:05:42.804 --rc geninfo_all_blocks=1 00:05:42.804 --rc geninfo_unexecuted_blocks=1 00:05:42.804 00:05:42.804 ' 00:05:42.804 17:53:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.804 --rc genhtml_branch_coverage=1 00:05:42.804 --rc genhtml_function_coverage=1 00:05:42.804 --rc genhtml_legend=1 00:05:42.804 --rc geninfo_all_blocks=1 00:05:42.804 --rc geninfo_unexecuted_blocks=1 00:05:42.804 00:05:42.804 ' 00:05:42.804 17:53:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:42.804 17:53:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2180246 00:05:42.804 17:53:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2180246 00:05:42.804 17:53:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:43.063 17:53:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2180246 ']' 00:05:43.063 17:53:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.063 17:53:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.063 17:53:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.063 17:53:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.063 17:53:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.063 [2024-12-09 17:53:50.831440] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
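The target being launched here may only answer the two RPCs named in --rpcs-allowed, and the test exercises both halves of that contract: spdk_get_version and rpc_get_methods respond (the version JSON below), while env_dpdk_get_mem_stats is rejected with -32601. Probing the same restricted target by hand would look roughly like this (default socket /var/tmp/spdk.sock):

  # Hand-driven equivalents of the checks below.
  scripts/rpc.py spdk_get_version | jq -r '.version'    # SPDK v25.01-pre ...
  scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # the two allowed names
  scripts/rpc.py env_dpdk_get_mem_stats                 # error -32601: Method not found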
00:05:43.063 [2024-12-09 17:53:50.831496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180246 ] 00:05:43.063 [2024-12-09 17:53:50.903601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.063 [2024-12-09 17:53:50.944036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.322 17:53:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.322 17:53:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:43.322 17:53:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:43.582 { 00:05:43.582 "version": "SPDK v25.01-pre git sha1 2e1d23f4b", 00:05:43.582 "fields": { 00:05:43.582 "major": 25, 00:05:43.582 "minor": 1, 00:05:43.582 "patch": 0, 00:05:43.582 "suffix": "-pre", 00:05:43.582 "commit": "2e1d23f4b" 00:05:43.582 } 00:05:43.582 } 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:43.582 17:53:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:43.582 17:53:51 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.841 request: 00:05:43.841 { 00:05:43.841 "method": "env_dpdk_get_mem_stats", 00:05:43.841 "req_id": 1 00:05:43.841 } 00:05:43.841 Got JSON-RPC error response 00:05:43.841 response: 00:05:43.841 { 00:05:43.841 "code": -32601, 00:05:43.841 "message": "Method not found" 00:05:43.841 } 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.841 17:53:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2180246 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2180246 ']' 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2180246 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180246 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180246' 00:05:43.841 killing process with pid 2180246 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 2180246 00:05:43.841 17:53:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 2180246 00:05:44.101 00:05:44.101 real 0m1.362s 00:05:44.101 user 0m1.544s 00:05:44.101 sys 0m0.514s 00:05:44.101 17:53:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.101 17:53:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.101 ************************************ 00:05:44.101 END TEST app_cmdline 00:05:44.101 ************************************ 00:05:44.101 17:53:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:44.101 17:53:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.101 17:53:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.101 17:53:51 -- common/autotest_common.sh@10 -- # set +x 00:05:44.101 ************************************ 00:05:44.101 START TEST version 00:05:44.101 ************************************ 00:05:44.102 17:53:52 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:44.364 * Looking for test storage... 
00:05:44.364 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.364 17:53:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.364 17:53:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.364 17:53:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.364 17:53:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.364 17:53:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.364 17:53:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.364 17:53:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.364 17:53:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.364 17:53:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.364 17:53:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.364 17:53:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.364 17:53:52 version -- scripts/common.sh@344 -- # case "$op" in 00:05:44.364 17:53:52 version -- scripts/common.sh@345 -- # : 1 00:05:44.364 17:53:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.364 17:53:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.364 17:53:52 version -- scripts/common.sh@365 -- # decimal 1 00:05:44.364 17:53:52 version -- scripts/common.sh@353 -- # local d=1 00:05:44.364 17:53:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.364 17:53:52 version -- scripts/common.sh@355 -- # echo 1 00:05:44.364 17:53:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.364 17:53:52 version -- scripts/common.sh@366 -- # decimal 2 00:05:44.364 17:53:52 version -- scripts/common.sh@353 -- # local d=2 00:05:44.364 17:53:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.364 17:53:52 version -- scripts/common.sh@355 -- # echo 2 00:05:44.364 17:53:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.364 17:53:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.364 17:53:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.364 17:53:52 version -- scripts/common.sh@368 -- # return 0 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.364 --rc genhtml_branch_coverage=1 00:05:44.364 --rc genhtml_function_coverage=1 00:05:44.364 --rc genhtml_legend=1 00:05:44.364 --rc geninfo_all_blocks=1 00:05:44.364 --rc geninfo_unexecuted_blocks=1 00:05:44.364 00:05:44.364 ' 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.364 --rc genhtml_branch_coverage=1 00:05:44.364 --rc genhtml_function_coverage=1 00:05:44.364 --rc genhtml_legend=1 00:05:44.364 --rc geninfo_all_blocks=1 00:05:44.364 --rc geninfo_unexecuted_blocks=1 00:05:44.364 00:05:44.364 ' 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.364 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.364 --rc genhtml_branch_coverage=1 00:05:44.364 --rc genhtml_function_coverage=1 00:05:44.364 --rc genhtml_legend=1 00:05:44.364 --rc geninfo_all_blocks=1 00:05:44.364 --rc geninfo_unexecuted_blocks=1 00:05:44.364 00:05:44.364 ' 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.364 --rc genhtml_branch_coverage=1 00:05:44.364 --rc genhtml_function_coverage=1 00:05:44.364 --rc genhtml_legend=1 00:05:44.364 --rc geninfo_all_blocks=1 00:05:44.364 --rc geninfo_unexecuted_blocks=1 00:05:44.364 00:05:44.364 ' 00:05:44.364 17:53:52 version -- app/version.sh@17 -- # get_header_version major 00:05:44.364 17:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # cut -f2 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.364 17:53:52 version -- app/version.sh@17 -- # major=25 00:05:44.364 17:53:52 version -- app/version.sh@18 -- # get_header_version minor 00:05:44.364 17:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # cut -f2 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.364 17:53:52 version -- app/version.sh@18 -- # minor=1 00:05:44.364 17:53:52 version -- app/version.sh@19 -- # get_header_version patch 00:05:44.364 17:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # cut -f2 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.364 17:53:52 version -- app/version.sh@19 -- # patch=0 00:05:44.364 17:53:52 version -- app/version.sh@20 -- # get_header_version suffix 00:05:44.364 17:53:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # cut -f2 00:05:44.364 17:53:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.364 17:53:52 version -- app/version.sh@20 -- # suffix=-pre 00:05:44.364 17:53:52 version -- app/version.sh@22 -- # version=25.1 00:05:44.364 17:53:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:44.364 17:53:52 version -- app/version.sh@28 -- # version=25.1rc0 00:05:44.364 17:53:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:44.364 17:53:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:44.364 17:53:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:44.364 17:53:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:44.364 00:05:44.364 real 0m0.277s 00:05:44.364 user 0m0.164s 00:05:44.364 sys 0m0.169s 00:05:44.364 17:53:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.364 17:53:52 version -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.364 ************************************ 00:05:44.364 END TEST version 00:05:44.364 ************************************ 00:05:44.623 17:53:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:44.623 17:53:52 -- spdk/autotest.sh@194 -- # uname -s 00:05:44.623 17:53:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:44.623 17:53:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:44.623 17:53:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:44.623 17:53:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:44.623 17:53:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.623 17:53:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.623 17:53:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:44.623 17:53:52 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:05:44.623 17:53:52 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:44.623 17:53:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:44.623 17:53:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.623 17:53:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.623 ************************************ 00:05:44.623 START TEST nvmf_rdma 00:05:44.623 ************************************ 00:05:44.623 17:53:52 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:44.623 * Looking for test storage... 00:05:44.623 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:44.624 17:53:52 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.624 17:53:52 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.624 17:53:52 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.883 17:53:52 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.883 17:53:52 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.884 17:53:52 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.884 --rc genhtml_branch_coverage=1 00:05:44.884 --rc genhtml_function_coverage=1 00:05:44.884 --rc genhtml_legend=1 00:05:44.884 --rc geninfo_all_blocks=1 00:05:44.884 --rc geninfo_unexecuted_blocks=1 00:05:44.884 00:05:44.884 ' 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.884 --rc genhtml_branch_coverage=1 00:05:44.884 --rc genhtml_function_coverage=1 00:05:44.884 --rc genhtml_legend=1 00:05:44.884 --rc geninfo_all_blocks=1 00:05:44.884 --rc geninfo_unexecuted_blocks=1 00:05:44.884 00:05:44.884 ' 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.884 --rc genhtml_branch_coverage=1 00:05:44.884 --rc genhtml_function_coverage=1 00:05:44.884 --rc genhtml_legend=1 00:05:44.884 --rc geninfo_all_blocks=1 00:05:44.884 --rc geninfo_unexecuted_blocks=1 00:05:44.884 00:05:44.884 ' 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.884 --rc genhtml_branch_coverage=1 00:05:44.884 --rc genhtml_function_coverage=1 00:05:44.884 --rc genhtml_legend=1 00:05:44.884 --rc geninfo_all_blocks=1 00:05:44.884 --rc geninfo_unexecuted_blocks=1 00:05:44.884 00:05:44.884 ' 00:05:44.884 17:53:52 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:05:44.884 17:53:52 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:44.884 17:53:52 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.884 17:53:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:44.884 ************************************ 00:05:44.884 START TEST nvmf_target_core 00:05:44.884 ************************************ 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:44.884 * Looking for test storage... 00:05:44.884 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.884 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.144 --rc genhtml_branch_coverage=1 00:05:45.144 --rc genhtml_function_coverage=1 00:05:45.144 --rc genhtml_legend=1 00:05:45.144 --rc geninfo_all_blocks=1 00:05:45.144 --rc geninfo_unexecuted_blocks=1 00:05:45.144 00:05:45.144 ' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.144 --rc genhtml_branch_coverage=1 00:05:45.144 --rc genhtml_function_coverage=1 00:05:45.144 --rc genhtml_legend=1 00:05:45.144 --rc geninfo_all_blocks=1 00:05:45.144 --rc geninfo_unexecuted_blocks=1 00:05:45.144 00:05:45.144 ' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.144 --rc genhtml_branch_coverage=1 00:05:45.144 --rc genhtml_function_coverage=1 00:05:45.144 --rc genhtml_legend=1 00:05:45.144 --rc geninfo_all_blocks=1 00:05:45.144 --rc geninfo_unexecuted_blocks=1 00:05:45.144 00:05:45.144 ' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.144 --rc genhtml_branch_coverage=1 00:05:45.144 --rc genhtml_function_coverage=1 00:05:45.144 --rc genhtml_legend=1 00:05:45.144 --rc geninfo_all_blocks=1 00:05:45.144 --rc geninfo_unexecuted_blocks=1 00:05:45.144 00:05:45.144 ' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.144 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.144 
************************************ 00:05:45.144 START TEST nvmf_abort 00:05:45.144 ************************************ 00:05:45.144 17:53:52 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:45.144 * Looking for test storage... 00:05:45.144 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:45.144 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.144 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.144 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.405 --rc genhtml_branch_coverage=1 00:05:45.405 --rc genhtml_function_coverage=1 00:05:45.405 --rc genhtml_legend=1 00:05:45.405 --rc geninfo_all_blocks=1 00:05:45.405 --rc geninfo_unexecuted_blocks=1 00:05:45.405 00:05:45.405 ' 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.405 --rc genhtml_branch_coverage=1 00:05:45.405 --rc genhtml_function_coverage=1 00:05:45.405 --rc genhtml_legend=1 00:05:45.405 --rc geninfo_all_blocks=1 00:05:45.405 --rc geninfo_unexecuted_blocks=1 00:05:45.405 00:05:45.405 ' 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.405 --rc genhtml_branch_coverage=1 00:05:45.405 --rc genhtml_function_coverage=1 00:05:45.405 --rc genhtml_legend=1 00:05:45.405 --rc geninfo_all_blocks=1 00:05:45.405 --rc geninfo_unexecuted_blocks=1 00:05:45.405 00:05:45.405 ' 00:05:45.405 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.405 --rc genhtml_branch_coverage=1 00:05:45.405 --rc genhtml_function_coverage=1 00:05:45.405 --rc genhtml_legend=1 00:05:45.405 --rc geninfo_all_blocks=1 00:05:45.405 --rc geninfo_unexecuted_blocks=1 00:05:45.405 00:05:45.405 ' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.406 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.406 17:53:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:53.531 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:53.531 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:53.531 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:53.531 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:53.531 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:05:53.532 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:05:53.532 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:05:53.532 altname enp217s0f0np0
00:05:53.532 altname ens818f0np0
00:05:53.532 inet 192.168.100.8/24 scope global mlx_0_0
00:05:53.532 valid_lft forever preferred_lft forever
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}'
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:05:53.532 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:05:53.532 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:05:53.532 altname enp217s0f1np1
00:05:53.532 altname ens818f1np1
00:05:53.532 inet 192.168.100.9/24 scope global mlx_0_1
00:05:53.532 valid_lft forever preferred_lft forever
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:05:53.532 17:54:00
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:53.532 192.168.100.9' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:53.532 192.168.100.9' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:53.532 192.168.100.9' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:53.532 17:54:00 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2184171 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2184171 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2184171 ']' 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.532 17:54:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.532 [2024-12-09 17:54:00.531542] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:05:53.532 [2024-12-09 17:54:00.531602] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.532 [2024-12-09 17:54:00.626997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.532 [2024-12-09 17:54:00.667852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.532 [2024-12-09 17:54:00.667896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.532 [2024-12-09 17:54:00.667905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.532 [2024-12-09 17:54:00.667914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.532 [2024-12-09 17:54:00.667937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
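The pattern traced above, nvmfappstart launching the target and waitforlisten blocking until the RPC socket answers, boils down to a few lines of shell. A minimal sketch, assuming a built SPDK tree in $SPDK_DIR; the polling loop and interval are illustrative rather than the harness's exact logic:

    # Launch the NVMe-oF target with the same flags as the trace above.
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket; spdk_get_version is a cheap RPC that
    # only succeeds once the app is up and listening (cf. max_retries=100).
    for (( i = 0; i < 100; i++ )); do
        $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
        sleep 0.1
    done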
00:05:53.532 [2024-12-09 17:54:00.669451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.532 [2024-12-09 17:54:00.669586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.532 [2024-12-09 17:54:00.669587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.532 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.532 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.533 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.533 [2024-12-09 17:54:01.460651] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaa60c0/0xaaa5b0) succeed. 00:05:53.533 [2024-12-09 17:54:01.478163] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xaa76b0/0xaebc50) succeed. 00:05:53.791 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 Malloc0 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 Delay0 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
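One detail of the setup just traced is easy to miss: the namespace attached to cnode0 is not Malloc0 itself but Delay0, a delay bdev stacked on top of it with 1,000,000 us (one second) average and p99 latencies on both reads and writes. That artificial latency keeps a deep queue of I/O in flight, which is what gives the abort example further down something to cancel. The two bdev RPCs as they could be issued by hand against the same socket (flag values copied from the trace; -r/-t are average and p99 read latency, -w/-n the write equivalents, in microseconds):

    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000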
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:53.792 [2024-12-09 17:54:01.657107] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:53.792 17:54:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:05:54.051 [2024-12-09 17:54:01.786095] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:05:55.954 Initializing NVMe Controllers
00:05:55.954 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:05:55.954 controller IO queue size 128 less than required
00:05:55.954 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:05:55.954 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:05:55.954 Initialization complete. Launching workers.
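At this point the subsystem is fully wired: cnode0 serves NSID 1 (Delay0) on 192.168.100.8:4420 alongside a discovery listener on the same port, so any RDMA-capable initiator could reach it. A hand check from another host might look like the following (illustrative, not part of this test):

    nvme discover -t rdma -a 192.168.100.8 -s 4420

The worker statistics that follow can be read roughly as: the "failed" count on the NS line is dominated by I/O the tool itself aborted, the CTRLR line counts abort commands submitted, and "success"/"unsuccessful" split those aborts by whether they caught their target command in time.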
00:05:55.954 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42773 00:05:55.954 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42834, failed to submit 62 00:05:55.954 success 42774, unsuccessful 60, failed 0 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:55.954 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:55.954 rmmod nvme_rdma 00:05:56.214 rmmod nvme_fabrics 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2184171 ']' 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2184171 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2184171 ']' 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2184171 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.214 17:54:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184171 00:05:56.214 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:56.214 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:56.214 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184171' 00:05:56.214 killing process with pid 2184171 00:05:56.214 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2184171 00:05:56.214 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2184171 00:05:56.473 17:54:04 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:56.473 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:56.473 00:05:56.473 real 0m11.320s 00:05:56.473 user 0m15.084s 00:05:56.473 sys 0m6.113s 00:05:56.473 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.473 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.473 ************************************ 00:05:56.473 END TEST nvmf_abort 00:05:56.474 ************************************ 00:05:56.474 17:54:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:56.474 17:54:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:56.474 17:54:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.474 17:54:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:56.474 ************************************ 00:05:56.474 START TEST nvmf_ns_hotplug_stress 00:05:56.474 ************************************ 00:05:56.474 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:56.734 * Looking for test storage... 00:05:56.734 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.734 --rc genhtml_branch_coverage=1 00:05:56.734 --rc genhtml_function_coverage=1 00:05:56.734 --rc genhtml_legend=1 00:05:56.734 --rc geninfo_all_blocks=1 00:05:56.734 --rc geninfo_unexecuted_blocks=1 00:05:56.734 00:05:56.734 ' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.734 --rc genhtml_branch_coverage=1 00:05:56.734 --rc genhtml_function_coverage=1 00:05:56.734 --rc genhtml_legend=1 00:05:56.734 --rc geninfo_all_blocks=1 00:05:56.734 --rc geninfo_unexecuted_blocks=1 00:05:56.734 00:05:56.734 ' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.734 --rc genhtml_branch_coverage=1 00:05:56.734 --rc genhtml_function_coverage=1 00:05:56.734 --rc genhtml_legend=1 00:05:56.734 --rc geninfo_all_blocks=1 00:05:56.734 --rc geninfo_unexecuted_blocks=1 00:05:56.734 00:05:56.734 ' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:56.734 --rc genhtml_branch_coverage=1 00:05:56.734 --rc genhtml_function_coverage=1 00:05:56.734 --rc genhtml_legend=1 00:05:56.734 --rc geninfo_all_blocks=1 00:05:56.734 --rc geninfo_unexecuted_blocks=1 00:05:56.734 00:05:56.734 ' 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.734 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.735 17:54:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.735 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:56.735 17:54:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:04.862 17:54:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:04.862 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:04.862 17:54:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:04.862 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:04.862 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:04.863 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:04.863 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:04.863 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:04.863 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:04.863 altname enp217s0f0np0 00:06:04.863 altname ens818f0np0 00:06:04.863 inet 192.168.100.8/24 scope global mlx_0_0 00:06:04.863 valid_lft forever preferred_lft forever 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:04.863 17:54:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:04.863 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:04.863 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:04.863 altname enp217s0f1np1 00:06:04.863 altname ens818f1np1 00:06:04.863 inet 192.168.100.9/24 scope global mlx_0_1 00:06:04.863 valid_lft forever preferred_lft forever 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:04.863 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:04.864 192.168.100.9' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:04.864 192.168.100.9' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:04.864 192.168.100.9' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2188682 00:06:04.864 17:54:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2188682 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2188682 ']' 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.864 17:54:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.864 [2024-12-09 17:54:11.984843] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:04.864 [2024-12-09 17:54:11.984901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.864 [2024-12-09 17:54:12.074970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.864 [2024-12-09 17:54:12.114380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.864 [2024-12-09 17:54:12.114418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.864 [2024-12-09 17:54:12.114427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.864 [2024-12-09 17:54:12.114435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.864 [2024-12-09 17:54:12.114442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
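Once nvmf_tgt (pid 2188682, core mask 0xE) is up and its reactors start, the trace that follows provisions the target over JSON-RPC before launching the I/O load. As a rough reconstruction only, the same sequence could be driven as a standalone script; every size, NQN, address, and port below is copied from this trace, and the rpc.py path assumes the SPDK checkout this job uses:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8 KiB I/O unit
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0                               # 32 MiB backing bdev, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512                                    # 1000 MiB null bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

While spdk_nvme_perf (-q 128 -w randread -o 512 -Q 1000) runs against 192.168.100.8:4420, ns_hotplug_stress.sh repeatedly removes and re-adds namespace 1 on cnode1 and grows NULL1 one step at a time via bdev_null_resize (1001, 1002, ...), which is what produces the suppressed "Read completed with error (sct=0, sc=11)" messages throughout the rest of this log.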
00:06:04.864 [2024-12-09 17:54:12.116022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.864 [2024-12-09 17:54:12.116130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.864 [2024-12-09 17:54:12.116131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.864 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.864 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:04.864 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.864 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.864 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:05.123 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:05.123 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:05.123 17:54:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:05.123 [2024-12-09 17:54:13.049568] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ba70c0/0x1bab5b0) succeed. 00:06:05.123 [2024-12-09 17:54:13.058896] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ba86b0/0x1becc50) succeed. 00:06:05.383 17:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:05.642 17:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:05.642 [2024-12-09 17:54:13.546016] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:05.642 17:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:05.901 17:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:06.160 Malloc0 00:06:06.160 17:54:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:06.160 Delay0 00:06:06.419 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.419 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:06:06.678 NULL1 00:06:06.678 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:06.937 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2189245 00:06:06.937 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:06.937 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:06.937 17:54:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.314 Read completed with error (sct=0, sc=11) 00:06:08.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.314 17:54:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.315 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.315 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:08.315 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:08.573 true 00:06:08.573 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:08.573 17:54:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 17:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.510 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:09.510 17:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:09.510 17:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:09.768 true 00:06:09.768 17:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:09.768 17:54:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 17:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.703 17:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:10.703 17:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:10.961 true 00:06:10.961 17:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:10.961 17:54:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 17:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.898 17:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:11.898 17:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:12.157 true 00:06:12.157 17:54:19 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:12.157 17:54:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 17:54:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.093 17:54:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:13.093 17:54:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:13.352 true 00:06:13.352 17:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:13.352 17:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.289 17:54:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.289 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:14.289 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:14.548 true 00:06:14.548 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:14.548 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.808 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.808 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:14.808 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:15.067 true 00:06:15.067 17:54:22 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:15.067 17:54:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 17:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.445 17:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:16.445 17:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:16.445 true 00:06:16.445 17:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:16.445 17:54:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.382 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.640 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:17.640 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:17.640 true 00:06:17.899 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:17.899 17:54:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.466 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.466 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.724 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:18.724 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:18.983 true 00:06:18.983 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:18.983 17:54:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 17:54:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.920 17:54:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:19.920 17:54:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:20.179 true 00:06:20.179 17:54:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:20.179 17:54:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 17:54:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.116 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.116 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:21.116 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:21.375 true 00:06:21.375 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:21.375 17:54:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.312 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.312 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:22.312 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:22.605 true 00:06:22.605 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:22.606 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.890 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.890 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:22.890 17:54:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:23.159 true 00:06:23.159 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:23.159 17:54:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 17:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.535 17:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:24.535 17:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:24.535 true 00:06:24.797 17:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:24.797 17:54:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.365 17:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.623 17:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:25.623 17:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.882 true 00:06:25.882 17:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:25.882 17:54:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 17:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.819 17:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:26.819 17:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:27.078 true 00:06:27.078 17:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:27.078 17:54:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.013 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:28.013 17:54:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:28.290 true 00:06:28.290 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:28.290 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 17:54:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.224 17:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:29.224 17:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:29.484 true 00:06:29.484 17:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:29.484 17:54:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.421 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.421 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:30.421 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:30.680 true 00:06:30.680 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:30.680 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.939 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.198 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:31.198 17:54:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:31.198 true 00:06:31.198 17:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:31.198 17:54:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 17:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.575 17:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:32.575 17:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:32.834 true 00:06:32.834 17:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:32.834 17:54:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 17:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.770 17:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:33.770 17:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:34.029 true 00:06:34.029 17:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:34.029 17:54:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 17:54:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.966 17:54:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:34.966 17:54:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:35.225 true 00:06:35.225 17:54:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:35.225 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 17:54:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.162 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:36.162 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:36.421 true 00:06:36.421 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:36.421 17:54:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.357 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.357 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:37.357 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:37.617 true 00:06:37.617 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:37.617 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.876 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.876 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:37.876 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:38.135 true 00:06:38.136 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:38.136 17:54:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.395 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.654 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:38.654 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:38.654 true 00:06:38.654 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:38.654 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.913 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.172 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:39.172 17:54:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:39.172 Initializing NVMe Controllers 00:06:39.172 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:39.172 Controller IO queue size 128, less than required. 00:06:39.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:39.172 Controller IO queue size 128, less than required. 00:06:39.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:39.172 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:39.172 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:39.172 Initialization complete. Launching workers. 
00:06:39.172 ======================================================== 00:06:39.172 Latency(us) 00:06:39.172 Device Information : IOPS MiB/s Average min max 00:06:39.172 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5291.67 2.58 21839.58 812.28 1007147.99 00:06:39.172 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35531.20 17.35 3602.27 1794.46 288242.57 00:06:39.172 ======================================================== 00:06:39.172 Total : 40822.87 19.93 5966.28 812.28 1007147.99 00:06:39.172 00:06:39.431 true 00:06:39.431 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189245 00:06:39.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2189245) - No such process 00:06:39.431 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2189245 00:06:39.431 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.431 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.691 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:39.691 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:39.691 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:39.691 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.691 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:39.950 null0 00:06:39.950 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.950 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.950 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:39.950 null1 00:06:40.209 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.209 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.209 17:54:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:40.209 null2 00:06:40.209 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.209 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.209 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 
00:06:40.468 null3 00:06:40.468 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.468 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.468 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:40.728 null4 00:06:40.728 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.728 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.728 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:40.728 null5 00:06:40.987 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.987 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.987 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:40.987 null6 00:06:40.987 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.987 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.987 17:54:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:41.247 null7 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2195204 2195206 2195207 2195209 2195211 2195213 2195214 2195216 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:41.247 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:41.248 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.248 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.507 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.766 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.767 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.025 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.026 17:54:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.285 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
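
The add/remove churn in this trace is a counter-driven loop: the @16 markers are the loop header of ns_hotplug_stress.sh ((( ++i )) / (( i < 10 ))), @17 is the namespace hot-add RPC, and @18 the hot-remove RPC. Below is a plausible sketch of the loop shape those markers imply; the rpc.py path, NQN, and NSID-to-bdev mapping are taken from the trace, but the shuffled ordering is an illustrative assumption, and the real script appears to run add and remove passes concurrently (note the occasional doubled (( ++i )) lines further down), which this sequential sketch does not reproduce.

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the traced loop (ns_hotplug_stress.sh@16-18).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; ++i)); do              # traced as (( ++i )) / (( i < 10 ))
        for n in $(shuf -e {1..8}); do          # hot-add NSIDs 1-8 in random order
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # line 17
        done
        for n in $(shuf -e {1..8}); do          # then hot-remove them again
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"                    # line 18
        done
    done

Throughout the trace each NSID n is consistently backed by bdev null(n-1) (e.g. -n 8 ... null7), so only the ordering varies between iterations.
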
00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.545 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.804 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.804 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.804 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.804 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.804 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.805 17:54:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
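
Taken individually, the two RPCs being exercised have the shape nvmf_subsystem_add_ns -n <nsid> <nqn> <bdev> and nvmf_subsystem_remove_ns <nqn> <nsid>. A standalone pair of calls, with the path, NQN, and argument layout copied verbatim from the trace (against a target on a non-default RPC socket these would additionally need rpc.py's -s option):

    # Attach bdev null4 to cnode1 as namespace 5, then detach it again.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    "$spdk/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
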
00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.805 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.064 17:54:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.323 17:54:51 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.323 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.324 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.583 17:54:51 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.583 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.842 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.101 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.101 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.101 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.101 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:06:44.101 17:54:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.101 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.102 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.102 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.102 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.102 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.102 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.102 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.361 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 
17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.621 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.880 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.880 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.880 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.880 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.880 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.881 17:54:52 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.881 17:54:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.140 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.463 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.463 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.464 17:54:53 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:06:45.464 rmmod nvme_rdma
00:06:45.464 rmmod nvme_fabrics
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2188682 ']'
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2188682
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2188682 ']'
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2188682
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2188682
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2188682'
00:06:45.464 killing process with pid 2188682
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2188682
00:06:45.464 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2188682
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:06:45.737 
00:06:45.737 real 0m49.244s
00:06:45.737 user 3m21.719s
00:06:45.737 sys 0m14.368s
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:45.737 ************************************
00:06:45.737 END TEST nvmf_ns_hotplug_stress
00:06:45.737 ************************************
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:45.737 ************************************
00:06:45.737 START TEST nvmf_delete_subsystem
00:06:45.737 ************************************
00:06:45.737 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:06:45.997 * Looking for test storage...
00:06:45.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:45.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.997 --rc genhtml_branch_coverage=1
00:06:45.997 --rc genhtml_function_coverage=1
00:06:45.997 --rc genhtml_legend=1
00:06:45.997 --rc geninfo_all_blocks=1
00:06:45.997 --rc geninfo_unexecuted_blocks=1
00:06:45.997 
00:06:45.997 '
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:45.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.997 --rc genhtml_branch_coverage=1
00:06:45.997 --rc genhtml_function_coverage=1
00:06:45.997 --rc genhtml_legend=1
00:06:45.997 --rc geninfo_all_blocks=1
00:06:45.997 --rc geninfo_unexecuted_blocks=1
00:06:45.998 
00:06:45.997 '
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:45.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.997 --rc genhtml_branch_coverage=1
00:06:45.997 --rc genhtml_function_coverage=1
00:06:45.997 --rc genhtml_legend=1
00:06:45.997 --rc geninfo_all_blocks=1
00:06:45.997 --rc geninfo_unexecuted_blocks=1
00:06:45.997 
00:06:45.997 '
00:06:45.997 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:45.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.998 --rc genhtml_branch_coverage=1
00:06:45.998 --rc genhtml_function_coverage=1
00:06:45.998 --rc genhtml_legend=1
00:06:45.998 --rc geninfo_all_blocks=1
00:06:45.998 --rc geninfo_unexecuted_blocks=1
00:06:45.998 
00:06:45.998 '
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.998 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.998 17:54:53 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.126 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.126 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:54.126 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:54.126 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:54.127 17:55:00 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:54.127 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:54.127 
17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:54.127 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:54.127 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:54.127 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:54.127 17:55:00 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:54.127 17:55:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:54.127 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:54.128 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:54.128 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:54.128 altname enp217s0f0np0 00:06:54.128 altname ens818f0np0 00:06:54.128 inet 192.168.100.8/24 scope global mlx_0_0 00:06:54.128 valid_lft forever preferred_lft forever 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:54.128 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:54.128 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:54.128 altname enp217s0f1np1 00:06:54.128 
altname ens818f1np1 00:06:54.128 inet 192.168.100.9/24 scope global mlx_0_1 00:06:54.128 valid_lft forever preferred_lft forever 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:54.128 17:55:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:54.128 192.168.100.9' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:54.128 192.168.100.9' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:54.128 192.168.100.9' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2199602 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2199602 00:06:54.128 17:55:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2199602 ']' 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.128 17:55:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.128 [2024-12-09 17:55:01.257648] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:06:54.128 [2024-12-09 17:55:01.257707] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.128 [2024-12-09 17:55:01.352073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.128 [2024-12-09 17:55:01.390802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.128 [2024-12-09 17:55:01.390841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.128 [2024-12-09 17:55:01.390851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.128 [2024-12-09 17:55:01.390859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.128 [2024-12-09 17:55:01.390867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
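A note on the "[: : integer expression expected" message logged earlier from nvmf/common.sh line 33: it appears when an unset or empty variable reaches bash's integer test, as in '[' '' -eq 1 ']'. The run keeps going because the failed test simply takes the false branch, but the usual hardening is to default the expansion to a number before comparing. A minimal sketch of the pattern, with a hypothetical flag name since the log does not show which variable was empty:

    # Fragile: an unset flag expands to '', and [ rejects '' for -eq
    if [ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]; then
        enable_feature
    fi

    # Hardened: ${var:-0} guarantees the integer test always sees a number
    if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        enable_feature
    fi

Both tests behave identically whenever the flag is actually set; only the empty case changes from a logged error to a clean false.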
00:06:54.128 [2024-12-09 17:55:01.392224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.128 [2024-12-09 17:55:01.392225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.128 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.128 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:54.128 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:54.128 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.128 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.388 [2024-12-09 17:55:02.164345] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23c6200/0x23ca6f0) succeed. 00:06:54.388 [2024-12-09 17:55:02.173027] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23c7750/0x240bd90) succeed. 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.388 [2024-12-09 17:55:02.260071] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.388 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.388 NULL1 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.389 Delay0 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2199883 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:54.389 17:55:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:54.647 [2024-12-09 17:55:02.399227] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
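What follows is the core of the test: spdk_nvme_perf has just been started in the background against the Delay0 namespace (the delay bdev was created with 1,000,000 us latencies, so each I/O takes roughly a second, keeping both 128-deep queues full), and the script now deletes the subsystem while that I/O is still in flight. A condensed sketch of the flow being exercised, reconstructed from the xtrace above rather than copied verbatim from delete_subsystem.sh:

    # kick off a background random read/write load over RDMA (flags as in the log)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2

    # delete the subsystem out from under the running workload
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # poll until perf exits; its queued I/O must complete (with errors), not hang
    # (kill -0 is left unredirected, which is why "No such process" shows up below)
    delay=0
    while kill -0 "$perf_pid"; do
        (( delay++ > 30 )) && exit 1   # give up and fail the test after ~15 s
        sleep 0.5
    done

The "NVMe io qpair process completion error" lines and the wall of failed completions that follow are therefore the expected outcome: the target tears the subsystem down and every outstanding request is completed back to the initiator with an error rather than being leaked.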
00:06:56.549 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.549 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.549 17:55:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.485 NVMe io qpair process completion error 00:06:57.485 NVMe io qpair process completion error 00:06:57.743 NVMe io qpair process completion error 00:06:57.743 NVMe io qpair process completion error 00:06:57.743 NVMe io qpair process completion error 00:06:57.743 NVMe io qpair process completion error 00:06:57.743 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.743 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:57.743 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2199883 00:06:57.743 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:58.310 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:58.310 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2199883 00:06:58.310 17:55:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Write completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Write completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Write completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Write completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Write completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: -6 00:06:58.569 Read completed with error (sct=0, sc=8) 00:06:58.569 starting I/O failed: 
-6
00:06:58.569 [condensed: several hundred further completion-status lines follow here, interleaving "Read/Write completed with error (sct=0, sc=8)" entries, with and without an accompanying "starting I/O failed: -6", as the two 128-deep qpairs on lcores 2 and 3 drain with errors after the subsystem is deleted]
00:06:58.571 Initializing NVMe Controllers
00:06:58.571 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:58.571 Controller IO queue size 128, less than required.
00:06:58.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:58.571 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:58.571 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:58.571 Initialization complete. Launching workers.
00:06:58.571 ========================================================
00:06:58.571 Latency(us)
00:06:58.571 Device Information : IOPS MiB/s Average min max
00:06:58.571 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.59 0.04 1591797.59 1000132.63 2969283.61
00:06:58.571 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.59 0.04 1593408.73 1000756.16 2970958.69
00:06:58.571 ========================================================
00:06:58.571 Total : 161.18 0.08 1592603.16 1000132.63 2970958.69
00:06:58.571 17:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
17:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2199883
17:55:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:58.571 [2024-12-09 17:55:06.508268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:06:58.571 [2024-12-09 17:55:06.508309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:06:58.571 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:59.138 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:59.138 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2199883 00:06:59.138 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2199883) - No such process 00:06:59.138 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2199883 00:06:59.138 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:59.138 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2199883 00:06:59.138 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2199883 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.139 [2024-12-09 17:55:07.028336] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2200694 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:06:59.139 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.397 [2024-12-09 17:55:07.145079] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:59.656 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.656 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:06:59.656 17:55:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.224 17:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.224 17:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:00.224 17:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.791 17:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.791 17:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:00.791 17:55:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.358 17:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.358 17:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:01.358 17:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.617 17:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.617 17:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:01.617 17:55:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.184 17:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.184 17:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:02.184 17:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.751 17:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.751 17:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:02.751 17:55:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.319 17:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.319 17:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:03.319 17:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.884 17:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.884 17:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:03.884 17:55:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.142 17:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.142 17:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:04.142 17:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.710 17:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.710 17:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:04.710 17:55:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.278 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.278 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:05.278 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.847 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.847 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:05.847 17:55:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.415 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.415 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:06.415 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.415 Initializing NVMe Controllers 00:07:06.415 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.415 Controller IO queue size 128, less than required. 00:07:06.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:06.415 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:06.415 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:06.415 Initialization complete. Launching workers. 00:07:06.415 ======================================================== 00:07:06.415 Latency(us) 00:07:06.415 Device Information : IOPS MiB/s Average min max 00:07:06.415 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001551.47 1000067.50 1004532.46 00:07:06.415 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002754.34 1000073.77 1005865.90 00:07:06.415 ======================================================== 00:07:06.415 Total : 256.00 0.12 1002152.90 1000067.50 1005865.90 00:07:06.415 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2200694 00:07:06.675 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2200694) - No such process 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2200694 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:06.675 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:06.675 rmmod nvme_rdma 00:07:06.675 rmmod nvme_fabrics 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2199602 ']' 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2199602 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2199602 ']' 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2199602 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
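
The polling visible throughout the delete_subsystem run above is a bounded wait: the script probes the spdk_nvme_perf PID with kill -0, sleeps 0.5 s between probes, and stops once the probe fails ("No such process") or the delay counter passes its bound. A minimal sketch of that pattern, assuming a generic "$perf_pid"; the bound of 20 mirrors the trace, the error handling is illustrative:

    # Bounded wait on a child process: probe with kill -0, sleep between probes.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf still running, giving up" >&2; break; }
        sleep 0.5
    done
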
00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2199602 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2199602' 00:07:06.934 killing process with pid 2199602 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2199602 00:07:06.934 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2199602 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:07.194 00:07:07.194 real 0m21.279s 00:07:07.194 user 0m50.530s 00:07:07.194 sys 0m6.927s 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.194 ************************************ 00:07:07.194 END TEST nvmf_delete_subsystem 00:07:07.194 ************************************ 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.194 17:55:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.194 ************************************ 00:07:07.194 START TEST nvmf_host_management 00:07:07.194 ************************************ 00:07:07.194 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:07.194 * Looking for test storage... 
00:07:07.194 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:07.194 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.194 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.194 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:07.454 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.455 --rc genhtml_branch_coverage=1 00:07:07.455 --rc genhtml_function_coverage=1 00:07:07.455 --rc genhtml_legend=1 00:07:07.455 --rc geninfo_all_blocks=1 00:07:07.455 --rc geninfo_unexecuted_blocks=1 00:07:07.455 00:07:07.455 ' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.455 --rc genhtml_branch_coverage=1 00:07:07.455 --rc genhtml_function_coverage=1 00:07:07.455 --rc genhtml_legend=1 00:07:07.455 --rc geninfo_all_blocks=1 00:07:07.455 --rc geninfo_unexecuted_blocks=1 00:07:07.455 00:07:07.455 ' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.455 --rc genhtml_branch_coverage=1 00:07:07.455 --rc genhtml_function_coverage=1 00:07:07.455 --rc genhtml_legend=1 00:07:07.455 --rc geninfo_all_blocks=1 00:07:07.455 --rc geninfo_unexecuted_blocks=1 00:07:07.455 00:07:07.455 ' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.455 --rc genhtml_branch_coverage=1 00:07:07.455 --rc genhtml_function_coverage=1 00:07:07.455 --rc genhtml_legend=1 00:07:07.455 --rc geninfo_all_blocks=1 00:07:07.455 --rc geninfo_unexecuted_blocks=1 00:07:07.455 00:07:07.455 ' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.455 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.455 17:55:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:15.582 17:55:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.582 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:15.583 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:15.583 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:15.583 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:07:15.583 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:15.583 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.583 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:15.583 altname enp217s0f0np0 00:07:15.583 altname ens818f0np0 00:07:15.583 inet 192.168.100.8/24 scope global mlx_0_0 00:07:15.583 valid_lft forever preferred_lft forever 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:15.583 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:15.584 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.584 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:07:15.584 altname enp217s0f1np1 00:07:15.584 altname ens818f1np1 00:07:15.584 inet 192.168.100.9/24 scope global mlx_0_1 00:07:15.584 valid_lft forever preferred_lft forever 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:15.584 17:55:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:15.584 192.168.100.9' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:15.584 192.168.100.9' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:15.584 192.168.100.9' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2205475 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2205475 00:07:15.584 
17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2205475 ']' 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 [2024-12-09 17:55:22.651411] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:15.584 [2024-12-09 17:55:22.651465] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.584 [2024-12-09 17:55:22.732563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.584 [2024-12-09 17:55:22.776052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.584 [2024-12-09 17:55:22.776091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.584 [2024-12-09 17:55:22.776100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.584 [2024-12-09 17:55:22.776109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.584 [2024-12-09 17:55:22.776132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
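
Before the target came up, the harness derived RDMA_IP_LIST ('192.168.100.8 192.168.100.9') by walking the mlx interfaces and stripping the prefix length from each address. The per-interface one-liner it traces, shown here standalone with the traced interface name as the example:

    # Extract the IPv4 address of an interface, dropping the /24 prefix length.
    # mlx_0_0 is the interface from the trace; any netdev name works.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # -> 192.168.100.8
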
00:07:15.584 [2024-12-09 17:55:22.777975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.584 [2024-12-09 17:55:22.777992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.584 [2024-12-09 17:55:22.778014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.584 [2024-12-09 17:55:22.778016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.584 17:55:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 [2024-12-09 17:55:22.949544] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x924c80/0x929170) succeed. 00:07:15.584 [2024-12-09 17:55:22.959435] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x926310/0x96a810) succeed. 
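
The two create_ib_device notices above confirm the RDMA transport came up on both mlx5 ports. The subsystem setup that follows (rpcs.txt is cat'd into rpc_cmd, the harness wrapper around scripts/rpc.py) is roughly equivalent to issuing these RPCs by hand; the transport options, NQN, serial, and malloc sizes mirror values visible elsewhere in the trace, while the bdev_malloc_create step is an assumption since the trace only prints the resulting "Malloc0":

    # Sketch of the host_management target setup, issued as individual RPCs.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0   # assumed step: trace shows only "Malloc0"
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
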
00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.584 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.584 Malloc0 00:07:15.585 [2024-12-09 17:55:23.162348] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2205523 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2205523 /var/tmp/bdevperf.sock 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2205523 ']' 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
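
waitforlisten then blocks until bdevperf answers on /var/tmp/bdevperf.sock. A sketch of that readiness poll, assuming the scripts/rpc.py client that rpc_cmd wraps; the retry count and interval are illustrative, not the harness defaults:

    # Poll an SPDK app's RPC socket until it responds (readiness check).
    for _ in {1..100}; do
        ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
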
00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:15.585 { 00:07:15.585 "params": { 00:07:15.585 "name": "Nvme$subsystem", 00:07:15.585 "trtype": "$TEST_TRANSPORT", 00:07:15.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:15.585 "adrfam": "ipv4", 00:07:15.585 "trsvcid": "$NVMF_PORT", 00:07:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.585 "hdgst": ${hdgst:-false}, 00:07:15.585 "ddgst": ${ddgst:-false} 00:07:15.585 }, 00:07:15.585 "method": "bdev_nvme_attach_controller" 00:07:15.585 } 00:07:15.585 EOF 00:07:15.585 )") 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:15.585 17:55:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:15.585 "params": { 00:07:15.585 "name": "Nvme0", 00:07:15.585 "trtype": "rdma", 00:07:15.585 "traddr": "192.168.100.8", 00:07:15.585 "adrfam": "ipv4", 00:07:15.585 "trsvcid": "4420", 00:07:15.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.585 "hdgst": false, 00:07:15.585 "ddgst": false 00:07:15.585 }, 00:07:15.585 "method": "bdev_nvme_attach_controller" 00:07:15.585 }' 00:07:15.585 [2024-12-09 17:55:23.264592] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:15.585 [2024-12-09 17:55:23.264642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2205523 ] 00:07:15.585 [2024-12-09 17:55:23.358877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.585 [2024-12-09 17:55:23.398235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.844 Running I/O for 10 seconds... 
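
The /dev/fd/63 in bdevperf's command line above is the read end of a bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller config just traced, and bdevperf consumes it without a temp file. The same wiring, standalone, with the binary path and workload flags as they appear in the trace:

    # Feed generated JSON to bdevperf via process substitution; bash exposes
    # the pipe as /dev/fd/63, which is what the traced command line shows.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10
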
00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1705 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1705 -ge 100 ']' 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
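
waitforio above is another bounded poll: up to ten probes of bdevperf's iostat until Nvme0n1 shows at least 100 completed reads (1705 on the first probe here, so it breaks immediately). Its core, reconstructed from the traced commands; the inter-probe sleep is an assumption, since the trace succeeds before one is needed:

    # Wait until a bdev shows read I/O through the bdevperf RPC socket.
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25   # assumed interval; not visible in the trace
    done
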
00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.413 17:55:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:17.247 1856.00 IOPS, 116.00 MiB/s [2024-12-09T16:55:25.226Z] [2024-12-09 17:55:25.206036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcff00 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfe80 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafe00 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fd80 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fd00 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fc80 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6fc00 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5fb80 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4fb00 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3fa80 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2fa00 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f980 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f900 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff880 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef800 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 
p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf780 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf700 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf680 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf600 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f580 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8f500 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7f480 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a6f400 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a5f380 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 
[2024-12-09 17:55:25.206570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a4f300 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a3f280 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a2f200 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.247 [2024-12-09 17:55:25.206628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a1f180 len:0x10000 key:0x182100 00:07:17.247 [2024-12-09 17:55:25.206637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a0f100 len:0x10000 key:0x182100 00:07:17.248 [2024-12-09 17:55:25.206656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000df0000 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ddff80 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dcff00 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dbfe80 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000dafe00 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d9fd80 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d8fd00 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d7fc80 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d6fc00 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d5fb80 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d4fb00 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d3fa80 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d2fa00 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d1f980 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000d0f900 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cff880 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.206988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cef800 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.206997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cdf780 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ccf700 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000cbf680 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000caf600 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c9f580 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c8f500 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c7f480 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c6f400 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c5f380 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c4f300 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c3f280 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c2f200 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c1f180 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000c0f100 len:0x10000 key:0x182000 00:07:17.248 [2024-12-09 17:55:25.207269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000ff0000 len:0x10000 key:0x181f00 00:07:17.248 [2024-12-09 17:55:25.207288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000fdff80 len:0x10000 key:0x181f00 00:07:17.248 [2024-12-09 17:55:25.207308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.207318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bf0000 len:0x10000 key:0x182100 00:07:17.248 [2024-12-09 17:55:25.207327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dff68000 sqhd:7210 p:0 m:0 dnr:0 00:07:17.248 [2024-12-09 17:55:25.210097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:17.248 task offset: 106496 on job bdev=Nvme0n1 fails 00:07:17.248 00:07:17.248 Latency(us) 00:07:17.248 [2024-12-09T16:55:25.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.249 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:17.249 Job: Nvme0n1 ended in about 1.62 seconds with error 00:07:17.249 Verification LBA range: start 0x0 length 0x400 00:07:17.249 Nvme0n1 : 1.62 1145.81 71.61 39.51 0.00 53520.17 2136.47 1026765.62 00:07:17.249 [2024-12-09T16:55:25.228Z] =================================================================================================================== 00:07:17.249 [2024-12-09T16:55:25.228Z] Total : 1145.81 71.61 39.51 0.00 53520.17 2136.47 1026765.62 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2205523 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.249 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.249 { 00:07:17.249 "params": { 00:07:17.249 "name": "Nvme$subsystem", 00:07:17.249 "trtype": "$TEST_TRANSPORT", 00:07:17.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.249 "adrfam": "ipv4", 00:07:17.249 "trsvcid": "$NVMF_PORT", 00:07:17.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.249 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:17.249 "hdgst": ${hdgst:-false}, 00:07:17.249 "ddgst": ${ddgst:-false} 00:07:17.249 }, 00:07:17.249 "method": "bdev_nvme_attach_controller" 00:07:17.249 } 00:07:17.249 EOF 00:07:17.249 )") 00:07:17.509 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.509 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.509 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.509 17:55:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.509 "params": { 00:07:17.509 "name": "Nvme0", 00:07:17.509 "trtype": "rdma", 00:07:17.509 "traddr": "192.168.100.8", 00:07:17.509 "adrfam": "ipv4", 00:07:17.509 "trsvcid": "4420", 00:07:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.509 "hdgst": false, 00:07:17.509 "ddgst": false 00:07:17.509 }, 00:07:17.509 "method": "bdev_nvme_attach_controller" 00:07:17.509 }' 00:07:17.509 [2024-12-09 17:55:25.265068] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:17.509 [2024-12-09 17:55:25.265114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2205878 ] 00:07:17.509 [2024-12-09 17:55:25.355987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.509 [2024-12-09 17:55:25.395106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.768 Running I/O for 1 seconds... 00:07:18.723 3075.00 IOPS, 192.19 MiB/s 00:07:18.723 Latency(us) 00:07:18.723 [2024-12-09T16:55:26.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.723 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:18.723 Verification LBA range: start 0x0 length 0x400 00:07:18.723 Nvme0n1 : 1.01 3115.13 194.70 0.00 0.00 20129.80 802.82 34603.01 00:07:18.723 [2024-12-09T16:55:26.702Z] =================================================================================================================== 00:07:18.723 [2024-12-09T16:55:26.702Z] Total : 3115.13 194.70 0.00 0.00 20129.80 802.82 34603.01 00:07:18.982 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2205523 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.982 
17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:18.982 rmmod nvme_rdma 00:07:18.982 rmmod nvme_fabrics 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2205475 ']' 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2205475 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2205475 ']' 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2205475 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2205475 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2205475' 00:07:18.982 killing process with pid 2205475 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2205475 00:07:18.982 17:55:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2205475 00:07:19.241 [2024-12-09 17:55:27.129232] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:19.241 00:07:19.241 real 0m12.117s 00:07:19.241 user 0m22.902s 00:07:19.241 sys 0m6.640s 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.241 ************************************ 00:07:19.241 END TEST nvmf_host_management 
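Editor's note, before the log moves on to nvmf_lvol: the second bdevperf run above took its whole NVMe-oF attach configuration as JSON on /dev/fd/62, built by gen_nvmf_target_json; the xtrace printed both the per-subsystem template and the rendered object. A sketch of the same run with the config written to a file instead, values copied from the rendered output above; the top-level subsystems/bdev wrapper is an assumption based on SPDK's standard JSON-config layout, since the trace only shows the inner object:

    # Illustrative standalone config (hypothetical path /tmp/nvme0.json).
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # Then, as in the trace:
    #   .../spdk/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1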
00:07:19.241 ************************************ 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.241 17:55:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.501 ************************************ 00:07:19.501 START TEST nvmf_lvol 00:07:19.501 ************************************ 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:19.501 * Looking for test storage... 00:07:19.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.501 --rc genhtml_branch_coverage=1 00:07:19.501 --rc genhtml_function_coverage=1 00:07:19.501 --rc genhtml_legend=1 00:07:19.501 --rc geninfo_all_blocks=1 00:07:19.501 --rc geninfo_unexecuted_blocks=1 00:07:19.501 00:07:19.501 ' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.501 --rc genhtml_branch_coverage=1 00:07:19.501 --rc genhtml_function_coverage=1 00:07:19.501 --rc genhtml_legend=1 00:07:19.501 --rc geninfo_all_blocks=1 00:07:19.501 --rc geninfo_unexecuted_blocks=1 00:07:19.501 00:07:19.501 ' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.501 --rc genhtml_branch_coverage=1 00:07:19.501 --rc genhtml_function_coverage=1 00:07:19.501 --rc genhtml_legend=1 00:07:19.501 --rc geninfo_all_blocks=1 00:07:19.501 --rc geninfo_unexecuted_blocks=1 00:07:19.501 00:07:19.501 ' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.501 --rc genhtml_branch_coverage=1 00:07:19.501 --rc genhtml_function_coverage=1 00:07:19.501 --rc genhtml_legend=1 00:07:19.501 --rc geninfo_all_blocks=1 00:07:19.501 --rc geninfo_unexecuted_blocks=1 00:07:19.501 00:07:19.501 ' 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.501 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.502 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:19.502 17:55:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.687 17:55:34 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:27.687 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:27.687 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:27.687 17:55:34 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:27.687 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:27.687 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
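Editor's note: the device walk above never shells out to vendor tooling. Each candidate mlx5 function is matched by PCI ID, and its kernel interface name is read straight from sysfs, which is what produces the two "Found net devices under 0000:d9:00.x" lines. A minimal sketch of that lookup, with the PCI address and interface name taken from this rig's log (the real script additionally checks that the glob matched before using the result):

    # Resolve the net interface(s) behind a PCI function via sysfs, mirroring
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace above.
    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"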
00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.687 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:27.688 
17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:27.688 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:27.688 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:27.688 altname enp217s0f0np0 00:07:27.688 altname ens818f0np0 00:07:27.688 inet 192.168.100.8/24 scope global mlx_0_0 00:07:27.688 valid_lft forever preferred_lft forever 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:27.688 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:27.688 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:27.688 altname enp217s0f1np1 00:07:27.688 altname ens818f1np1 00:07:27.688 inet 192.168.100.9/24 scope global mlx_0_1 00:07:27.688 valid_lft forever preferred_lft forever 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:27.688 192.168.100.9' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:27.688 192.168.100.9' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:27.688 192.168.100.9' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:27.688 
17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2209656 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2209656 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2209656 ']' 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.688 17:55:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.688 [2024-12-09 17:55:34.839722] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:27.688 [2024-12-09 17:55:34.839777] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.688 [2024-12-09 17:55:34.931711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.688 [2024-12-09 17:55:34.971509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.688 [2024-12-09 17:55:34.971549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.688 [2024-12-09 17:55:34.971558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.688 [2024-12-09 17:55:34.971566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.688 [2024-12-09 17:55:34.971573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
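Note: nvmfappstart launches the target with core mask 0x7 (three reactors, visible starting up below) and waits for its RPC socket. A minimal standalone sketch of that launch-and-wait pattern, with illustrative paths relative to an SPDK build tree:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers;
    # this approximates what waitforlisten does in the trace above
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done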
00:07:27.688 [2024-12-09 17:55:34.972974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.688 [2024-12-09 17:55:34.973045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.688 [2024-12-09 17:55:34.973046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.949 17:55:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:27.949 [2024-12-09 17:55:35.910500] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d22dc0/0x1d272b0) succeed. 00:07:27.949 [2024-12-09 17:55:35.919491] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d243b0/0x1d68950) succeed. 00:07:28.208 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.467 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:28.467 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:28.726 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:28.726 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:28.994 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:28.994 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4acdf417-5d85-4d6a-ae99-52d5e5f85e0e 00:07:28.994 17:55:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4acdf417-5d85-4d6a-ae99-52d5e5f85e0e lvol 20 00:07:29.253 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ee20577a-a472-4667-9757-615fe95456de 00:07:29.253 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.511 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee20577a-a472-4667-9757-615fe95456de 00:07:29.770 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:07:29.770 [2024-12-09 17:55:37.702644] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:07:29.770 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:07:30.030 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2210247
00:07:30.030 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:07:30.030 17:55:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:07:30.967 17:55:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ee20577a-a472-4667-9757-615fe95456de MY_SNAPSHOT
00:07:31.226 17:55:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=36828c0c-fda4-41b6-8c74-f4035630aaad
00:07:31.226 17:55:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ee20577a-a472-4667-9757-615fe95456de 30
00:07:31.485 17:55:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 36828c0c-fda4-41b6-8c74-f4035630aaad MY_CLONE
00:07:31.744 17:55:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5bb5ebbf-0a6d-4ecf-a991-11a93a65c764
00:07:31.744 17:55:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5bb5ebbf-0a6d-4ecf-a991-11a93a65c764
00:07:32.004 17:55:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2210247
00:07:41.986 Initializing NVMe Controllers
00:07:41.986 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:07:41.986 Controller IO queue size 128, less than required.
00:07:41.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:41.986 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:41.986 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:41.986 Initialization complete. Launching workers.
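Note: the perf run above drove 128-deep 4K random writes for 10 seconds while the snapshot, resize, clone, and inflate RPCs exercised the lvol metadata path under load; the results table follows below. Condensed from the rpc.py calls traced earlier, the stack under test was assembled like this (names and UUIDs are the ones this run reported):

    rpc.py bdev_malloc_create 64 512                 # Malloc0
    rpc.py bdev_malloc_create 64 512                 # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs        # 4acdf417-5d85-4d6a-ae99-52d5e5f85e0e
    rpc.py bdev_lvol_create -u 4acdf417-5d85-4d6a-ae99-52d5e5f85e0e lvol 20
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee20577a-a472-4667-9757-615fe95456de
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420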
00:07:41.986 ======================================================== 00:07:41.986 Latency(us) 00:07:41.986 Device Information : IOPS MiB/s Average min max 00:07:41.986 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16538.60 64.60 7740.68 1999.80 40700.87 00:07:41.986 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16515.40 64.51 7750.68 3619.82 36719.70 00:07:41.986 ======================================================== 00:07:41.986 Total : 33054.00 129.12 7745.67 1999.80 40700.87 00:07:41.986 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ee20577a-a472-4667-9757-615fe95456de 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4acdf417-5d85-4d6a-ae99-52d5e5f85e0e 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.986 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:41.986 rmmod nvme_rdma 00:07:42.245 rmmod nvme_fabrics 00:07:42.245 17:55:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2209656 ']' 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2209656 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2209656 ']' 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2209656 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2209656 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2209656' 00:07:42.245 killing process with pid 2209656 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2209656 00:07:42.245 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2209656 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:42.505 00:07:42.505 real 0m23.112s 00:07:42.505 user 1m13.084s 00:07:42.505 sys 0m6.889s 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.505 ************************************ 00:07:42.505 END TEST nvmf_lvol 00:07:42.505 ************************************ 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.505 ************************************ 00:07:42.505 START TEST nvmf_lvs_grow 00:07:42.505 ************************************ 00:07:42.505 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:42.766 * Looking for test storage... 
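Note: run_test has just closed the nvmf_lvol suite (real 0m23.112s) and opened nvmf_lvs_grow. The banners and timing come from the run_test wrapper in autotest_common.sh; in simplified sketch form (the real helper also manages xtrace state and per-test timing records):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        echo "************ END TEST $name ************"
    }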
00:07:42.766 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.766 --rc genhtml_branch_coverage=1 00:07:42.766 --rc genhtml_function_coverage=1 00:07:42.766 --rc genhtml_legend=1 00:07:42.766 --rc geninfo_all_blocks=1 00:07:42.766 --rc geninfo_unexecuted_blocks=1 00:07:42.766 00:07:42.766 ' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.766 --rc genhtml_branch_coverage=1 00:07:42.766 --rc genhtml_function_coverage=1 00:07:42.766 --rc genhtml_legend=1 00:07:42.766 --rc geninfo_all_blocks=1 00:07:42.766 --rc geninfo_unexecuted_blocks=1 00:07:42.766 00:07:42.766 ' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.766 --rc genhtml_branch_coverage=1 00:07:42.766 --rc genhtml_function_coverage=1 00:07:42.766 --rc genhtml_legend=1 00:07:42.766 --rc geninfo_all_blocks=1 00:07:42.766 --rc geninfo_unexecuted_blocks=1 00:07:42.766 00:07:42.766 ' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.766 --rc genhtml_branch_coverage=1 00:07:42.766 --rc genhtml_function_coverage=1 00:07:42.766 --rc genhtml_legend=1 00:07:42.766 --rc geninfo_all_blocks=1 00:07:42.766 --rc geninfo_unexecuted_blocks=1 00:07:42.766 00:07:42.766 ' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
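Note: the block above is the harness probing the installed lcov (it reports 1.15) and comparing it against 2, field by field, to decide that the pre-2.0 '--rc lcov_branch_coverage' option names should be exported. A self-contained sketch of roughly the comparator being traced:

    lt() {  # succeed when version $1 sorts before version $2
        local IFS=.- v1 v2 i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov, use the lcov_* --rc names"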
00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.766 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.767 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.767 17:55:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:51.212 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.213 17:55:57 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:51.213 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:51.213 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:51.213 17:55:57 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:51.213 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:51.213 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:51.213 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:51.213 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:51.213 altname enp217s0f0np0 00:07:51.213 altname ens818f0np0 00:07:51.213 inet 192.168.100.8/24 scope global mlx_0_0 00:07:51.213 valid_lft forever preferred_lft forever 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:51.213 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:51.214 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:51.214 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:51.214 altname enp217s0f1np1 00:07:51.214 altname ens818f1np1 00:07:51.214 inet 192.168.100.9/24 scope global mlx_0_1 00:07:51.214 valid_lft forever preferred_lft forever 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.214 17:55:57 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:51.214 192.168.100.9' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:51.214 192.168.100.9' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:51.214 192.168.100.9' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2215721 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2215721 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2215721 ']' 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.214 17:55:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.214 [2024-12-09 17:55:58.024127] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:07:51.214 [2024-12-09 17:55:58.024186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.214 [2024-12-09 17:55:58.113850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.214 [2024-12-09 17:55:58.152333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.214 [2024-12-09 17:55:58.152375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.214 [2024-12-09 17:55:58.152384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.214 [2024-12-09 17:55:58.152392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.214 [2024-12-09 17:55:58.152399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
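(annotation, not part of the captured log) The trace above runs one small pipeline per RDMA port to turn an interface name into a target address. A minimal stand-alone sketch of that helper, assuming the mlx_0_0/mlx_0_1 names are already known, could look like this:

    get_ip_address() {
        local interface=$1
        # 'ip -o' prints one line per address; field 4 is e.g. 192.168.100.8/24,
        # and cut strips the /24 prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run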
00:07:51.214 [2024-12-09 17:55:58.153044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.214 17:55:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:51.214 [2024-12-09 17:55:59.100282] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa166a0/0xa1ab90) succeed. 00:07:51.214 [2024-12-09 17:55:59.109507] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa17b50/0xa5c230) succeed. 00:07:51.214 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:51.214 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.214 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.214 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:51.474 ************************************ 00:07:51.474 START TEST lvs_grow_clean 00:07:51.474 ************************************ 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:51.474 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.733 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:07:51.733 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:07:51.733 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.992 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.992 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:51.992 17:55:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b lvol 150 00:07:52.252 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4b2e4c2b-d22f-45be-ae45-73f2ab32f614 00:07:52.252 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.252 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:52.252 [2024-12-09 17:56:00.185607] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:52.252 [2024-12-09 17:56:00.185665] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:52.252 true 00:07:52.252 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:07:52.252 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:52.511 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:52.511 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.769 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4b2e4c2b-d22f-45be-ae45-73f2ab32f614 00:07:53.027 17:56:00 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:53.027 [2024-12-09 17:56:00.980087] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:53.027 17:56:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2216352 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2216352 /var/tmp/bdevperf.sock 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2216352 ']' 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.286 17:56:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:53.286 [2024-12-09 17:56:01.227252] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
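(annotation, not part of the captured log) Condensed from the setup traced above, the clean-grow preparation is a short RPC sequence against the running target. This is a sketch, with paths shortened and rpc.py standing in for scripts/rpc.py; names and sizes are taken from the log:

    truncate -s 200M ./aio_bdev_file
    rpc.py bdev_aio_create ./aio_bdev_file aio_bdev 4096       # 51200 4KiB blocks
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)         # 150M volume
    truncate -s 400M ./aio_bdev_file                           # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                            # bdev: 51200 -> 102400 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
              -t rdma -a 192.168.100.8 -s 4420

Note that the lvstore itself still reports 49 data clusters at this point; it is only grown (bdev_lvol_grow_lvstore, seen further down) once bdevperf I/O is in flight.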
00:07:53.286 [2024-12-09 17:56:01.227306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216352 ] 00:07:53.544 [2024-12-09 17:56:01.317701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.544 [2024-12-09 17:56:01.358489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.112 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.112 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:54.112 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.371 Nvme0n1 00:07:54.371 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.630 [ 00:07:54.630 { 00:07:54.630 "name": "Nvme0n1", 00:07:54.630 "aliases": [ 00:07:54.630 "4b2e4c2b-d22f-45be-ae45-73f2ab32f614" 00:07:54.630 ], 00:07:54.630 "product_name": "NVMe disk", 00:07:54.630 "block_size": 4096, 00:07:54.630 "num_blocks": 38912, 00:07:54.630 "uuid": "4b2e4c2b-d22f-45be-ae45-73f2ab32f614", 00:07:54.630 "numa_id": 1, 00:07:54.630 "assigned_rate_limits": { 00:07:54.630 "rw_ios_per_sec": 0, 00:07:54.630 "rw_mbytes_per_sec": 0, 00:07:54.630 "r_mbytes_per_sec": 0, 00:07:54.630 "w_mbytes_per_sec": 0 00:07:54.630 }, 00:07:54.630 "claimed": false, 00:07:54.630 "zoned": false, 00:07:54.630 "supported_io_types": { 00:07:54.630 "read": true, 00:07:54.630 "write": true, 00:07:54.630 "unmap": true, 00:07:54.630 "flush": true, 00:07:54.630 "reset": true, 00:07:54.630 "nvme_admin": true, 00:07:54.630 "nvme_io": true, 00:07:54.630 "nvme_io_md": false, 00:07:54.630 "write_zeroes": true, 00:07:54.630 "zcopy": false, 00:07:54.630 "get_zone_info": false, 00:07:54.630 "zone_management": false, 00:07:54.630 "zone_append": false, 00:07:54.630 "compare": true, 00:07:54.630 "compare_and_write": true, 00:07:54.630 "abort": true, 00:07:54.630 "seek_hole": false, 00:07:54.630 "seek_data": false, 00:07:54.630 "copy": true, 00:07:54.630 "nvme_iov_md": false 00:07:54.630 }, 00:07:54.630 "memory_domains": [ 00:07:54.630 { 00:07:54.630 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:54.630 "dma_device_type": 0 00:07:54.630 } 00:07:54.630 ], 00:07:54.630 "driver_specific": { 00:07:54.630 "nvme": [ 00:07:54.630 { 00:07:54.630 "trid": { 00:07:54.630 "trtype": "RDMA", 00:07:54.630 "adrfam": "IPv4", 00:07:54.630 "traddr": "192.168.100.8", 00:07:54.630 "trsvcid": "4420", 00:07:54.630 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.630 }, 00:07:54.630 "ctrlr_data": { 00:07:54.630 "cntlid": 1, 00:07:54.630 "vendor_id": "0x8086", 00:07:54.630 "model_number": "SPDK bdev Controller", 00:07:54.630 "serial_number": "SPDK0", 00:07:54.630 "firmware_revision": "25.01", 00:07:54.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.630 "oacs": { 00:07:54.630 "security": 0, 00:07:54.630 "format": 0, 00:07:54.630 "firmware": 0, 00:07:54.630 "ns_manage": 0 00:07:54.630 }, 00:07:54.630 "multi_ctrlr": true, 
00:07:54.630 "ana_reporting": false 00:07:54.630 }, 00:07:54.630 "vs": { 00:07:54.630 "nvme_version": "1.3" 00:07:54.630 }, 00:07:54.630 "ns_data": { 00:07:54.630 "id": 1, 00:07:54.630 "can_share": true 00:07:54.630 } 00:07:54.630 } 00:07:54.630 ], 00:07:54.630 "mp_policy": "active_passive" 00:07:54.630 } 00:07:54.630 } 00:07:54.630 ] 00:07:54.630 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.630 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2216527 00:07:54.630 17:56:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.630 Running I/O for 10 seconds... 00:07:56.009 Latency(us) 00:07:56.009 [2024-12-09T16:56:03.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.010 Nvme0n1 : 1.00 34501.00 134.77 0.00 0.00 0.00 0.00 0.00 00:07:56.010 [2024-12-09T16:56:03.989Z] =================================================================================================================== 00:07:56.010 [2024-12-09T16:56:03.989Z] Total : 34501.00 134.77 0.00 0.00 0.00 0.00 0.00 00:07:56.010 00:07:56.579 17:56:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:07:56.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.838 Nvme0n1 : 2.00 34834.00 136.07 0.00 0.00 0.00 0.00 0.00 00:07:56.838 [2024-12-09T16:56:04.817Z] =================================================================================================================== 00:07:56.838 [2024-12-09T16:56:04.817Z] Total : 34834.00 136.07 0.00 0.00 0.00 0.00 0.00 00:07:56.838 00:07:56.838 true 00:07:56.838 17:56:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:07:56.838 17:56:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:57.097 17:56:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.098 17:56:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.098 17:56:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2216527 00:07:57.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.666 Nvme0n1 : 3.00 34976.33 136.63 0.00 0.00 0.00 0.00 0.00 00:07:57.666 [2024-12-09T16:56:05.645Z] =================================================================================================================== 00:07:57.666 [2024-12-09T16:56:05.645Z] Total : 34976.33 136.63 0.00 0.00 0.00 0.00 0.00 00:07:57.666 00:07:59.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.045 Nvme0n1 : 4.00 35089.00 137.07 0.00 0.00 0.00 0.00 0.00 00:07:59.045 [2024-12-09T16:56:07.024Z] 
=================================================================================================================== 00:07:59.045 [2024-12-09T16:56:07.024Z] Total : 35089.00 137.07 0.00 0.00 0.00 0.00 0.00 00:07:59.045 00:07:59.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.983 Nvme0n1 : 5.00 35181.40 137.43 0.00 0.00 0.00 0.00 0.00 00:07:59.983 [2024-12-09T16:56:07.962Z] =================================================================================================================== 00:07:59.983 [2024-12-09T16:56:07.962Z] Total : 35181.40 137.43 0.00 0.00 0.00 0.00 0.00 00:07:59.983 00:08:00.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.920 Nvme0n1 : 6.00 35253.33 137.71 0.00 0.00 0.00 0.00 0.00 00:08:00.920 [2024-12-09T16:56:08.899Z] =================================================================================================================== 00:08:00.920 [2024-12-09T16:56:08.899Z] Total : 35253.33 137.71 0.00 0.00 0.00 0.00 0.00 00:08:00.921 00:08:01.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.886 Nvme0n1 : 7.00 35296.00 137.88 0.00 0.00 0.00 0.00 0.00 00:08:01.886 [2024-12-09T16:56:09.865Z] =================================================================================================================== 00:08:01.886 [2024-12-09T16:56:09.865Z] Total : 35296.00 137.88 0.00 0.00 0.00 0.00 0.00 00:08:01.886 00:08:02.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.823 Nvme0n1 : 8.00 35252.50 137.71 0.00 0.00 0.00 0.00 0.00 00:08:02.823 [2024-12-09T16:56:10.802Z] =================================================================================================================== 00:08:02.823 [2024-12-09T16:56:10.802Z] Total : 35252.50 137.71 0.00 0.00 0.00 0.00 0.00 00:08:02.823 00:08:03.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.847 Nvme0n1 : 9.00 35267.44 137.76 0.00 0.00 0.00 0.00 0.00 00:08:03.847 [2024-12-09T16:56:11.826Z] =================================================================================================================== 00:08:03.847 [2024-12-09T16:56:11.826Z] Total : 35267.44 137.76 0.00 0.00 0.00 0.00 0.00 00:08:03.847 00:08:04.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.785 Nvme0n1 : 10.00 35289.30 137.85 0.00 0.00 0.00 0.00 0.00 00:08:04.785 [2024-12-09T16:56:12.764Z] =================================================================================================================== 00:08:04.785 [2024-12-09T16:56:12.764Z] Total : 35289.30 137.85 0.00 0.00 0.00 0.00 0.00 00:08:04.785 00:08:04.785 00:08:04.785 Latency(us) 00:08:04.785 [2024-12-09T16:56:12.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.785 Nvme0n1 : 10.00 35288.49 137.85 0.00 0.00 3624.22 2673.87 9437.18 00:08:04.785 [2024-12-09T16:56:12.764Z] =================================================================================================================== 00:08:04.785 [2024-12-09T16:56:12.764Z] Total : 35288.49 137.85 0.00 0.00 3624.22 2673.87 9437.18 00:08:04.785 { 00:08:04.785 "results": [ 00:08:04.785 { 00:08:04.785 "job": "Nvme0n1", 00:08:04.785 "core_mask": "0x2", 00:08:04.785 "workload": "randwrite", 00:08:04.785 "status": "finished", 00:08:04.785 "queue_depth": 128, 00:08:04.785 "io_size": 4096, 
00:08:04.785 "runtime": 10.003147, 00:08:04.785 "iops": 35288.49471071454, 00:08:04.785 "mibps": 137.84568246372868, 00:08:04.785 "io_failed": 0, 00:08:04.785 "io_timeout": 0, 00:08:04.785 "avg_latency_us": 3624.221467746943, 00:08:04.785 "min_latency_us": 2673.8688, 00:08:04.785 "max_latency_us": 9437.184 00:08:04.785 } 00:08:04.785 ], 00:08:04.785 "core_count": 1 00:08:04.785 } 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2216352 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2216352 ']' 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2216352 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216352 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216352' 00:08:04.785 killing process with pid 2216352 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2216352 00:08:04.785 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.785 00:08:04.785 Latency(us) 00:08:04.785 [2024-12-09T16:56:12.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.785 [2024-12-09T16:56:12.764Z] =================================================================================================================== 00:08:04.785 [2024-12-09T16:56:12.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.785 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2216352 00:08:05.044 17:56:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:05.303 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.562 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:05.562 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.562 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:05.562 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:05.562 17:56:13 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.821 [2024-12-09 17:56:13.659434] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:05.821 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:06.080 request: 00:08:06.080 { 00:08:06.080 "uuid": "17fb722e-5c8c-4edb-bad3-7d543fa2ec3b", 00:08:06.080 "method": "bdev_lvol_get_lvstores", 00:08:06.080 "req_id": 1 00:08:06.080 } 00:08:06.080 Got JSON-RPC error response 00:08:06.080 response: 00:08:06.080 { 00:08:06.080 "code": -19, 00:08:06.080 "message": "No such device" 00:08:06.080 } 00:08:06.080 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:06.080 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.080 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.080 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.080 17:56:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.339 aio_bdev 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4b2e4c2b-d22f-45be-ae45-73f2ab32f614 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4b2e4c2b-d22f-45be-ae45-73f2ab32f614 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.339 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.598 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4b2e4c2b-d22f-45be-ae45-73f2ab32f614 -t 2000 00:08:06.599 [ 00:08:06.599 { 00:08:06.599 "name": "4b2e4c2b-d22f-45be-ae45-73f2ab32f614", 00:08:06.599 "aliases": [ 00:08:06.599 "lvs/lvol" 00:08:06.599 ], 00:08:06.599 "product_name": "Logical Volume", 00:08:06.599 "block_size": 4096, 00:08:06.599 "num_blocks": 38912, 00:08:06.599 "uuid": "4b2e4c2b-d22f-45be-ae45-73f2ab32f614", 00:08:06.599 "assigned_rate_limits": { 00:08:06.599 "rw_ios_per_sec": 0, 00:08:06.599 "rw_mbytes_per_sec": 0, 00:08:06.599 "r_mbytes_per_sec": 0, 00:08:06.599 "w_mbytes_per_sec": 0 00:08:06.599 }, 00:08:06.599 "claimed": false, 00:08:06.599 "zoned": false, 00:08:06.599 "supported_io_types": { 00:08:06.599 "read": true, 00:08:06.599 "write": true, 00:08:06.599 "unmap": true, 00:08:06.599 "flush": false, 00:08:06.599 "reset": true, 00:08:06.599 "nvme_admin": false, 00:08:06.599 "nvme_io": false, 00:08:06.599 "nvme_io_md": false, 00:08:06.599 "write_zeroes": true, 00:08:06.599 "zcopy": false, 00:08:06.599 "get_zone_info": false, 00:08:06.599 "zone_management": false, 00:08:06.599 "zone_append": false, 00:08:06.599 "compare": false, 00:08:06.599 "compare_and_write": false, 00:08:06.599 "abort": false, 00:08:06.599 "seek_hole": true, 00:08:06.599 "seek_data": true, 00:08:06.599 "copy": false, 00:08:06.599 "nvme_iov_md": false 00:08:06.599 }, 00:08:06.599 "driver_specific": { 00:08:06.599 "lvol": { 00:08:06.599 "lvol_store_uuid": "17fb722e-5c8c-4edb-bad3-7d543fa2ec3b", 00:08:06.599 "base_bdev": "aio_bdev", 00:08:06.599 "thin_provision": false, 00:08:06.599 "num_allocated_clusters": 38, 00:08:06.599 "snapshot": false, 00:08:06.599 "clone": false, 00:08:06.599 "esnap_clone": false 00:08:06.599 } 00:08:06.599 } 00:08:06.599 } 00:08:06.599 ] 00:08:06.599 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:06.599 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:06.599 17:56:14 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.858 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.858 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:06.858 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:07.117 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:07.117 17:56:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4b2e4c2b-d22f-45be-ae45-73f2ab32f614 00:08:07.117 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 17fb722e-5c8c-4edb-bad3-7d543fa2ec3b 00:08:07.375 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.634 00:08:07.634 real 0m16.284s 00:08:07.634 user 0m16.177s 00:08:07.634 sys 0m1.241s 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.634 ************************************ 00:08:07.634 END TEST lvs_grow_clean 00:08:07.634 ************************************ 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.634 ************************************ 00:08:07.634 START TEST lvs_grow_dirty 00:08:07.634 ************************************ 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.634 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.893 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.893 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.153 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:08.153 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:08.153 17:56:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.412 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.412 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.412 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 lvol 150 00:08:08.412 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=22bbf246-8440-4879-a77c-11482f0d492c 00:08:08.412 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:08.412 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.671 [2024-12-09 17:56:16.539944] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.671 [2024-12-09 17:56:16.539998] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.671 true 00:08:08.671 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:08.671 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.930 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.930 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.189 17:56:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 22bbf246-8440-4879-a77c-11482f0d492c 00:08:09.189 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:09.447 [2024-12-09 17:56:17.278328] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:09.447 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2219246 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2219246 /var/tmp/bdevperf.sock 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2219246 ']' 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.706 17:56:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.706 [2024-12-09 17:56:17.524267] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
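(annotation, not part of the captured log) As in the clean run, the store is grown while bdevperf is writing to it. The pattern that produces the (( data_clusters == 99 )) assertions seen in both variants is, in sketch form:

    rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # claim the space added by the 400M truncate
    clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" \
                   | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))    # 400M at 4MiB clusters; 99 usable in this layout, as 49 were at 200M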
00:08:09.706 [2024-12-09 17:56:17.524322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219246 ] 00:08:09.706 [2024-12-09 17:56:17.615654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.706 [2024-12-09 17:56:17.656885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.643 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.643 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:10.643 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.902 Nvme0n1 00:08:10.902 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.902 [ 00:08:10.902 { 00:08:10.902 "name": "Nvme0n1", 00:08:10.902 "aliases": [ 00:08:10.902 "22bbf246-8440-4879-a77c-11482f0d492c" 00:08:10.902 ], 00:08:10.902 "product_name": "NVMe disk", 00:08:10.902 "block_size": 4096, 00:08:10.902 "num_blocks": 38912, 00:08:10.902 "uuid": "22bbf246-8440-4879-a77c-11482f0d492c", 00:08:10.902 "numa_id": 1, 00:08:10.902 "assigned_rate_limits": { 00:08:10.902 "rw_ios_per_sec": 0, 00:08:10.902 "rw_mbytes_per_sec": 0, 00:08:10.902 "r_mbytes_per_sec": 0, 00:08:10.902 "w_mbytes_per_sec": 0 00:08:10.902 }, 00:08:10.902 "claimed": false, 00:08:10.902 "zoned": false, 00:08:10.902 "supported_io_types": { 00:08:10.902 "read": true, 00:08:10.902 "write": true, 00:08:10.902 "unmap": true, 00:08:10.902 "flush": true, 00:08:10.902 "reset": true, 00:08:10.902 "nvme_admin": true, 00:08:10.902 "nvme_io": true, 00:08:10.902 "nvme_io_md": false, 00:08:10.902 "write_zeroes": true, 00:08:10.902 "zcopy": false, 00:08:10.902 "get_zone_info": false, 00:08:10.902 "zone_management": false, 00:08:10.902 "zone_append": false, 00:08:10.902 "compare": true, 00:08:10.902 "compare_and_write": true, 00:08:10.902 "abort": true, 00:08:10.902 "seek_hole": false, 00:08:10.902 "seek_data": false, 00:08:10.902 "copy": true, 00:08:10.902 "nvme_iov_md": false 00:08:10.902 }, 00:08:10.902 "memory_domains": [ 00:08:10.902 { 00:08:10.902 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:10.902 "dma_device_type": 0 00:08:10.902 } 00:08:10.902 ], 00:08:10.902 "driver_specific": { 00:08:10.902 "nvme": [ 00:08:10.902 { 00:08:10.902 "trid": { 00:08:10.902 "trtype": "RDMA", 00:08:10.902 "adrfam": "IPv4", 00:08:10.902 "traddr": "192.168.100.8", 00:08:10.902 "trsvcid": "4420", 00:08:10.902 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.902 }, 00:08:10.902 "ctrlr_data": { 00:08:10.902 "cntlid": 1, 00:08:10.902 "vendor_id": "0x8086", 00:08:10.902 "model_number": "SPDK bdev Controller", 00:08:10.902 "serial_number": "SPDK0", 00:08:10.902 "firmware_revision": "25.01", 00:08:10.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.902 "oacs": { 00:08:10.902 "security": 0, 00:08:10.902 "format": 0, 00:08:10.902 "firmware": 0, 00:08:10.902 "ns_manage": 0 00:08:10.902 }, 00:08:10.902 "multi_ctrlr": true, 
00:08:10.902 "ana_reporting": false 00:08:10.902 }, 00:08:10.902 "vs": { 00:08:10.902 "nvme_version": "1.3" 00:08:10.902 }, 00:08:10.902 "ns_data": { 00:08:10.902 "id": 1, 00:08:10.902 "can_share": true 00:08:10.902 } 00:08:10.902 } 00:08:10.902 ], 00:08:10.902 "mp_policy": "active_passive" 00:08:10.902 } 00:08:10.902 } 00:08:10.902 ] 00:08:10.902 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2219514 00:08:10.902 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.902 17:56:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.161 Running I/O for 10 seconds... 00:08:12.098 Latency(us) 00:08:12.099 [2024-12-09T16:56:20.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.099 Nvme0n1 : 1.00 34562.00 135.01 0.00 0.00 0.00 0.00 0.00 00:08:12.099 [2024-12-09T16:56:20.078Z] =================================================================================================================== 00:08:12.099 [2024-12-09T16:56:20.078Z] Total : 34562.00 135.01 0.00 0.00 0.00 0.00 0.00 00:08:12.099 00:08:13.036 17:56:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:13.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.036 Nvme0n1 : 2.00 34737.00 135.69 0.00 0.00 0.00 0.00 0.00 00:08:13.036 [2024-12-09T16:56:21.015Z] =================================================================================================================== 00:08:13.036 [2024-12-09T16:56:21.015Z] Total : 34737.00 135.69 0.00 0.00 0.00 0.00 0.00 00:08:13.036 00:08:13.295 true 00:08:13.295 17:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:13.295 17:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.295 17:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.295 17:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.295 17:56:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2219514 00:08:14.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.233 Nvme0n1 : 3.00 34846.00 136.12 0.00 0.00 0.00 0.00 0.00 00:08:14.233 [2024-12-09T16:56:22.212Z] =================================================================================================================== 00:08:14.233 [2024-12-09T16:56:22.212Z] Total : 34846.00 136.12 0.00 0.00 0.00 0.00 0.00 00:08:14.233 00:08:15.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.170 Nvme0n1 : 4.00 35010.00 136.76 0.00 0.00 0.00 0.00 0.00 00:08:15.170 [2024-12-09T16:56:23.149Z] 
=================================================================================================================== 00:08:15.170 [2024-12-09T16:56:23.149Z] Total : 35010.00 136.76 0.00 0.00 0.00 0.00 0.00 00:08:15.170 00:08:16.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.107 Nvme0n1 : 5.00 35124.20 137.20 0.00 0.00 0.00 0.00 0.00 00:08:16.107 [2024-12-09T16:56:24.086Z] =================================================================================================================== 00:08:16.107 [2024-12-09T16:56:24.086Z] Total : 35124.20 137.20 0.00 0.00 0.00 0.00 0.00 00:08:16.107 00:08:17.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.044 Nvme0n1 : 6.00 35195.67 137.48 0.00 0.00 0.00 0.00 0.00 00:08:17.044 [2024-12-09T16:56:25.023Z] =================================================================================================================== 00:08:17.044 [2024-12-09T16:56:25.023Z] Total : 35195.67 137.48 0.00 0.00 0.00 0.00 0.00 00:08:17.044 00:08:17.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.990 Nvme0n1 : 7.00 35249.29 137.69 0.00 0.00 0.00 0.00 0.00 00:08:17.990 [2024-12-09T16:56:25.969Z] =================================================================================================================== 00:08:17.990 [2024-12-09T16:56:25.969Z] Total : 35249.29 137.69 0.00 0.00 0.00 0.00 0.00 00:08:17.990 00:08:19.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.368 Nvme0n1 : 8.00 35292.88 137.86 0.00 0.00 0.00 0.00 0.00 00:08:19.368 [2024-12-09T16:56:27.347Z] =================================================================================================================== 00:08:19.368 [2024-12-09T16:56:27.347Z] Total : 35292.88 137.86 0.00 0.00 0.00 0.00 0.00 00:08:19.368 00:08:20.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.306 Nvme0n1 : 9.00 35325.22 137.99 0.00 0.00 0.00 0.00 0.00 00:08:20.306 [2024-12-09T16:56:28.285Z] =================================================================================================================== 00:08:20.306 [2024-12-09T16:56:28.285Z] Total : 35325.22 137.99 0.00 0.00 0.00 0.00 0.00 00:08:20.306 00:08:21.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.244 Nvme0n1 : 10.00 35356.20 138.11 0.00 0.00 0.00 0.00 0.00 00:08:21.244 [2024-12-09T16:56:29.223Z] =================================================================================================================== 00:08:21.244 [2024-12-09T16:56:29.223Z] Total : 35356.20 138.11 0.00 0.00 0.00 0.00 0.00 00:08:21.244 00:08:21.244 00:08:21.244 Latency(us) 00:08:21.244 [2024-12-09T16:56:29.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.244 Nvme0n1 : 10.00 35356.51 138.11 0.00 0.00 3617.03 2293.76 11639.19 00:08:21.244 [2024-12-09T16:56:29.223Z] =================================================================================================================== 00:08:21.244 [2024-12-09T16:56:29.223Z] Total : 35356.51 138.11 0.00 0.00 3617.03 2293.76 11639.19 00:08:21.244 { 00:08:21.244 "results": [ 00:08:21.244 { 00:08:21.244 "job": "Nvme0n1", 00:08:21.244 "core_mask": "0x2", 00:08:21.244 "workload": "randwrite", 00:08:21.244 "status": "finished", 00:08:21.244 "queue_depth": 128, 00:08:21.244 "io_size": 4096, 
00:08:21.244 "runtime": 10.004776, 00:08:21.244 "iops": 35356.51372904301, 00:08:21.244 "mibps": 138.11138175407424, 00:08:21.244 "io_failed": 0, 00:08:21.244 "io_timeout": 0, 00:08:21.244 "avg_latency_us": 3617.028984029808, 00:08:21.244 "min_latency_us": 2293.76, 00:08:21.244 "max_latency_us": 11639.1936 00:08:21.244 } 00:08:21.244 ], 00:08:21.244 "core_count": 1 00:08:21.244 } 00:08:21.244 17:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2219246 00:08:21.244 17:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2219246 ']' 00:08:21.244 17:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2219246 00:08:21.244 17:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:21.244 17:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.244 17:56:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219246 00:08:21.244 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.244 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.244 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219246' 00:08:21.244 killing process with pid 2219246 00:08:21.244 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2219246 00:08:21.244 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.244 00:08:21.244 Latency(us) 00:08:21.244 [2024-12-09T16:56:29.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.244 [2024-12-09T16:56:29.223Z] =================================================================================================================== 00:08:21.244 [2024-12-09T16:56:29.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.244 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2219246 00:08:21.244 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:21.503 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.762 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.762 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:22.022 17:56:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2215721 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2215721 00:08:22.022 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2215721 Killed "${NVMF_APP[@]}" "$@" 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2221394 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2221394 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2221394 ']' 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.022 17:56:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.022 [2024-12-09 17:56:29.900336] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:08:22.022 [2024-12-09 17:56:29.900390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.022 [2024-12-09 17:56:29.991231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.281 [2024-12-09 17:56:30.037377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.281 [2024-12-09 17:56:30.037412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.281 [2024-12-09 17:56:30.037422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.281 [2024-12-09 17:56:30.037430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
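nvmfappstart, as traced above, boils down to launching nvmf_tgt and blocking until its RPC socket answers. A rough sketch, assuming a polling loop on rpc_get_methods is an acceptable stand-in for waitforlisten's internals:

    NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Start the target with the flags seen in the trace (-i shm id, -e trace
    # group mask, -m core mask) and remember its pid for later killprocess.
    "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Poll the default RPC socket until the app is up (illustrative loop,
    # not the actual waitforlisten body).
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done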
00:08:22.281 [2024-12-09 17:56:30.037437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.281 [2024-12-09 17:56:30.037957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.849 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.108 [2024-12-09 17:56:30.952350] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:23.108 [2024-12-09 17:56:30.952441] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:23.108 [2024-12-09 17:56:30.952467] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 22bbf246-8440-4879-a77c-11482f0d492c 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=22bbf246-8440-4879-a77c-11482f0d492c 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.108 17:56:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.367 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 22bbf246-8440-4879-a77c-11482f0d492c -t 2000 00:08:23.626 [ 00:08:23.626 { 00:08:23.626 "name": "22bbf246-8440-4879-a77c-11482f0d492c", 00:08:23.626 "aliases": [ 00:08:23.626 "lvs/lvol" 00:08:23.626 ], 00:08:23.626 "product_name": "Logical Volume", 00:08:23.626 "block_size": 4096, 00:08:23.626 "num_blocks": 38912, 00:08:23.626 "uuid": "22bbf246-8440-4879-a77c-11482f0d492c", 00:08:23.626 "assigned_rate_limits": { 00:08:23.626 "rw_ios_per_sec": 0, 00:08:23.626 "rw_mbytes_per_sec": 0, 
00:08:23.626 "r_mbytes_per_sec": 0, 00:08:23.626 "w_mbytes_per_sec": 0 00:08:23.626 }, 00:08:23.627 "claimed": false, 00:08:23.627 "zoned": false, 00:08:23.627 "supported_io_types": { 00:08:23.627 "read": true, 00:08:23.627 "write": true, 00:08:23.627 "unmap": true, 00:08:23.627 "flush": false, 00:08:23.627 "reset": true, 00:08:23.627 "nvme_admin": false, 00:08:23.627 "nvme_io": false, 00:08:23.627 "nvme_io_md": false, 00:08:23.627 "write_zeroes": true, 00:08:23.627 "zcopy": false, 00:08:23.627 "get_zone_info": false, 00:08:23.627 "zone_management": false, 00:08:23.627 "zone_append": false, 00:08:23.627 "compare": false, 00:08:23.627 "compare_and_write": false, 00:08:23.627 "abort": false, 00:08:23.627 "seek_hole": true, 00:08:23.627 "seek_data": true, 00:08:23.627 "copy": false, 00:08:23.627 "nvme_iov_md": false 00:08:23.627 }, 00:08:23.627 "driver_specific": { 00:08:23.627 "lvol": { 00:08:23.627 "lvol_store_uuid": "24a6cab7-c24b-43f6-84e8-89f147de3b18", 00:08:23.627 "base_bdev": "aio_bdev", 00:08:23.627 "thin_provision": false, 00:08:23.627 "num_allocated_clusters": 38, 00:08:23.627 "snapshot": false, 00:08:23.627 "clone": false, 00:08:23.627 "esnap_clone": false 00:08:23.627 } 00:08:23.627 } 00:08:23.627 } 00:08:23.627 ] 00:08:23.627 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:23.627 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:23.627 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.627 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.627 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:23.627 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.886 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.886 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.145 [2024-12-09 17:56:31.925025] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:24.145 17:56:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:24.405 request: 00:08:24.405 { 00:08:24.405 "uuid": "24a6cab7-c24b-43f6-84e8-89f147de3b18", 00:08:24.405 "method": "bdev_lvol_get_lvstores", 00:08:24.405 "req_id": 1 00:08:24.405 } 00:08:24.405 Got JSON-RPC error response 00:08:24.405 response: 00:08:24.405 { 00:08:24.405 "code": -19, 00:08:24.405 "message": "No such device" 00:08:24.405 } 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.405 aio_bdev 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 22bbf246-8440-4879-a77c-11482f0d492c 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=22bbf246-8440-4879-a77c-11482f0d492c 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.405 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.405 17:56:32 
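The NOT wrapper traced in this stretch inverts the exit status: after the aio_bdev hot-remove, bdev_lvol_get_lvstores is expected to fail with -19 (No such device), which is exactly the JSON-RPC error response captured above. In plain bash the same negative check looks roughly like:

    # Expect failure: the lvstore's base bdev was just deleted, so the query
    # should return the "No such device" error seen above rather than succeed.
    if "$RPC" bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 2>/dev/null; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi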
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.664 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 22bbf246-8440-4879-a77c-11482f0d492c -t 2000 00:08:24.923 [ 00:08:24.923 { 00:08:24.923 "name": "22bbf246-8440-4879-a77c-11482f0d492c", 00:08:24.923 "aliases": [ 00:08:24.923 "lvs/lvol" 00:08:24.923 ], 00:08:24.923 "product_name": "Logical Volume", 00:08:24.923 "block_size": 4096, 00:08:24.923 "num_blocks": 38912, 00:08:24.923 "uuid": "22bbf246-8440-4879-a77c-11482f0d492c", 00:08:24.923 "assigned_rate_limits": { 00:08:24.923 "rw_ios_per_sec": 0, 00:08:24.923 "rw_mbytes_per_sec": 0, 00:08:24.923 "r_mbytes_per_sec": 0, 00:08:24.923 "w_mbytes_per_sec": 0 00:08:24.923 }, 00:08:24.923 "claimed": false, 00:08:24.923 "zoned": false, 00:08:24.923 "supported_io_types": { 00:08:24.923 "read": true, 00:08:24.923 "write": true, 00:08:24.923 "unmap": true, 00:08:24.923 "flush": false, 00:08:24.923 "reset": true, 00:08:24.923 "nvme_admin": false, 00:08:24.923 "nvme_io": false, 00:08:24.923 "nvme_io_md": false, 00:08:24.923 "write_zeroes": true, 00:08:24.923 "zcopy": false, 00:08:24.923 "get_zone_info": false, 00:08:24.923 "zone_management": false, 00:08:24.923 "zone_append": false, 00:08:24.923 "compare": false, 00:08:24.923 "compare_and_write": false, 00:08:24.923 "abort": false, 00:08:24.923 "seek_hole": true, 00:08:24.923 "seek_data": true, 00:08:24.923 "copy": false, 00:08:24.923 "nvme_iov_md": false 00:08:24.923 }, 00:08:24.923 "driver_specific": { 00:08:24.923 "lvol": { 00:08:24.923 "lvol_store_uuid": "24a6cab7-c24b-43f6-84e8-89f147de3b18", 00:08:24.923 "base_bdev": "aio_bdev", 00:08:24.923 "thin_provision": false, 00:08:24.924 "num_allocated_clusters": 38, 00:08:24.924 "snapshot": false, 00:08:24.924 "clone": false, 00:08:24.924 "esnap_clone": false 00:08:24.924 } 00:08:24.924 } 00:08:24.924 } 00:08:24.924 ] 00:08:24.924 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:24.924 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:24.924 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.924 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:24.924 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:24.924 17:56:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.183 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.183 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 22bbf246-8440-4879-a77c-11482f0d492c 00:08:25.443 17:56:33 
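waitforbdev, traced twice in this test, reduces to letting examine callbacks finish and then looking the bdev up with a timeout; condensed from the @908/@910 steps above:

    # Flush any pending examine callbacks, then fetch the bdev with the
    # 2000 ms timeout (-t) that waitforbdev passes by default.
    "$RPC" bdev_wait_for_examine
    "$RPC" bdev_get_bdevs -b 22bbf246-8440-4879-a77c-11482f0d492c -t 2000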
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24a6cab7-c24b-43f6-84e8-89f147de3b18 00:08:25.702 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.702 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.962 00:08:25.962 real 0m18.133s 00:08:25.962 user 0m47.095s 00:08:25.962 sys 0m3.297s 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.962 ************************************ 00:08:25.962 END TEST lvs_grow_dirty 00:08:25.962 ************************************ 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:25.962 nvmf_trace.0 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.962 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:25.962 rmmod nvme_rdma 00:08:25.962 rmmod nvme_fabrics 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:25.963 
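The teardown traced above archives the shared-memory trace file for offline spdk_trace analysis and then unloads the initiator modules; condensed, with the output directory taken from the tar invocation in the trace:

    OUT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output   # from the tar line above

    # Preserve the tracepoint buffer named after the app's shm id (nvmf_trace.0).
    tar -C /dev/shm/ -cvzf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

    # nvmftestfini: drop the kernel initiator modules loaded for the test.
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics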
17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2221394 ']' 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2221394 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2221394 ']' 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2221394 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.963 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221394 00:08:26.223 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.223 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.223 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221394' 00:08:26.223 killing process with pid 2221394 00:08:26.223 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2221394 00:08:26.223 17:56:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2221394 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:26.223 00:08:26.223 real 0m43.667s 00:08:26.223 user 1m10.069s 00:08:26.223 sys 0m10.676s 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.223 ************************************ 00:08:26.223 END TEST nvmf_lvs_grow 00:08:26.223 ************************************ 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.223 ************************************ 00:08:26.223 START TEST nvmf_bdev_io_wait 00:08:26.223 ************************************ 00:08:26.223 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:26.483 * Looking for test storage... 
00:08:26.483 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:26.483 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.483 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.483 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.483 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.483 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.483 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.484 --rc genhtml_branch_coverage=1 00:08:26.484 --rc genhtml_function_coverage=1 00:08:26.484 --rc genhtml_legend=1 00:08:26.484 --rc geninfo_all_blocks=1 00:08:26.484 --rc geninfo_unexecuted_blocks=1 00:08:26.484 00:08:26.484 ' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.484 --rc genhtml_branch_coverage=1 00:08:26.484 --rc genhtml_function_coverage=1 00:08:26.484 --rc genhtml_legend=1 00:08:26.484 --rc geninfo_all_blocks=1 00:08:26.484 --rc geninfo_unexecuted_blocks=1 00:08:26.484 00:08:26.484 ' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.484 --rc genhtml_branch_coverage=1 00:08:26.484 --rc genhtml_function_coverage=1 00:08:26.484 --rc genhtml_legend=1 00:08:26.484 --rc geninfo_all_blocks=1 00:08:26.484 --rc geninfo_unexecuted_blocks=1 00:08:26.484 00:08:26.484 ' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.484 --rc genhtml_branch_coverage=1 00:08:26.484 --rc genhtml_function_coverage=1 00:08:26.484 --rc genhtml_legend=1 00:08:26.484 --rc geninfo_all_blocks=1 00:08:26.484 --rc geninfo_unexecuted_blocks=1 00:08:26.484 00:08:26.484 ' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.484 17:56:34 
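The lcov version probe above runs through the cmp_versions helper in scripts/common.sh; a condensed sketch of the same split-and-compare idea (not the exact library body):

    # Split each version on [.-:], then compare numerically component by
    # component; missing components default to 0.
    lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<<"$1"
        read -ra v2 <<<"$2"
        local i n
        (( n = ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }

    # Same probe as the trace: last field of `lcov --version` against 2.
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"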
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.484 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.485 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.485 17:56:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.748 17:56:41 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:34.748 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:34.748 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:34.748 17:56:41 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.748 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:34.749 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:34.749 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
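The discovery traced above matches mlx5 PCI functions against the known device-ID tables and then lists the kernel netdevs each function exposes; the core of it, using the two functions found in this run:

    # For each matched ConnectX function, glob its net/ directory and strip
    # the sysfs path, as the @411/@427 steps above do.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done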
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
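rdma_device_init, traced above, is essentially a fixed modprobe list, and the per-NIC address harvesting that follows reduces to the ip/awk/cut pipeline seen in the next steps. Condensed:

    # Load the IB/RDMA core modules the test depends on (@66 through @72 above).
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

    # get_ip_address: first IPv4 address of an interface, as traced below.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8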
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:34.749 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:34.749 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:34.749 altname enp217s0f0np0 00:08:34.749 altname ens818f0np0 00:08:34.749 inet 192.168.100.8/24 scope global mlx_0_0 00:08:34.749 valid_lft forever preferred_lft forever 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:34.749 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:34.749 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:34.749 altname enp217s0f1np1 00:08:34.749 altname ens818f1np1 00:08:34.749 inet 192.168.100.9/24 scope global mlx_0_1 00:08:34.749 valid_lft forever preferred_lft forever 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:34.749 192.168.100.9' 00:08:34.749 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:34.749 192.168.100.9' 00:08:34.749 
17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:34.750 192.168.100.9' 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2225554 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2225554 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2225554 ']' 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.750 17:56:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.750 [2024-12-09 17:56:41.815435] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
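The address discovery traced above reduces to a short shell routine. A minimal sketch, with the function and variable names taken from the nvmf/common.sh trace (mlx_0_0 and mlx_0_1 are the interface names this particular rig reported):

    # Query each RDMA netdev for its first IPv4 address, then split the list
    # into first/second target IPs exactly as the head/tail pipeline above does.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)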
00:08:34.750 [2024-12-09 17:56:41.815498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.750 [2024-12-09 17:56:41.909790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.750 [2024-12-09 17:56:41.953873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.750 [2024-12-09 17:56:41.953913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.750 [2024-12-09 17:56:41.953923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.750 [2024-12-09 17:56:41.953931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.750 [2024-12-09 17:56:41.953938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.750 [2024-12-09 17:56:41.956427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.750 [2024-12-09 17:56:41.956467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.750 [2024-12-09 17:56:41.956591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.750 [2024-12-09 17:56:41.956591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.750 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.009 17:56:42 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.009 [2024-12-09 17:56:42.792077] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c749d0/0x1c78ec0) succeed. 00:08:35.009 [2024-12-09 17:56:42.801599] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c76060/0x1cba560) succeed. 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.009 Malloc0 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.009 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.267 [2024-12-09 17:56:42.988549] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2225721 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2225723 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
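With the xtrace noise stripped away, the target bring-up above is a seven-step RPC sequence. A sketch of the equivalent direct scripts/rpc.py calls (rpc_cmd in the trace is a thin wrapper around rpc.py, and framework_start_init is required here because nvmf_tgt was started with --wait-for-rpc):

    # Issued against the target's default RPC socket, /var/tmp/spdk.sock.
    scripts/rpc.py bdev_set_options -p 5 -c 1      # tiny bdev_io pool, to exercise the IO-wait path
    scripts/rpc.py framework_start_init            # finish the startup deferred by --wait-for-rpc
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420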
00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.267 { 00:08:35.267 "params": { 00:08:35.267 "name": "Nvme$subsystem", 00:08:35.267 "trtype": "$TEST_TRANSPORT", 00:08:35.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.267 "adrfam": "ipv4", 00:08:35.267 "trsvcid": "$NVMF_PORT", 00:08:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.267 "hdgst": ${hdgst:-false}, 00:08:35.267 "ddgst": ${ddgst:-false} 00:08:35.267 }, 00:08:35.267 "method": "bdev_nvme_attach_controller" 00:08:35.267 } 00:08:35.267 EOF 00:08:35.267 )") 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2225725 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.267 { 00:08:35.267 "params": { 00:08:35.267 "name": "Nvme$subsystem", 00:08:35.267 "trtype": "$TEST_TRANSPORT", 00:08:35.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.267 "adrfam": "ipv4", 00:08:35.267 "trsvcid": "$NVMF_PORT", 00:08:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.267 "hdgst": ${hdgst:-false}, 00:08:35.267 "ddgst": ${ddgst:-false} 00:08:35.267 }, 00:08:35.267 "method": "bdev_nvme_attach_controller" 00:08:35.267 } 00:08:35.267 EOF 00:08:35.267 )") 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2225728 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:35.267 17:56:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.267 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.267 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.267 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.267 { 00:08:35.267 "params": { 00:08:35.267 "name": "Nvme$subsystem", 00:08:35.267 "trtype": "$TEST_TRANSPORT", 
00:08:35.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.267 "adrfam": "ipv4", 00:08:35.267 "trsvcid": "$NVMF_PORT", 00:08:35.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.268 "hdgst": ${hdgst:-false}, 00:08:35.268 "ddgst": ${ddgst:-false} 00:08:35.268 }, 00:08:35.268 "method": "bdev_nvme_attach_controller" 00:08:35.268 } 00:08:35.268 EOF 00:08:35.268 )") 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.268 { 00:08:35.268 "params": { 00:08:35.268 "name": "Nvme$subsystem", 00:08:35.268 "trtype": "$TEST_TRANSPORT", 00:08:35.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.268 "adrfam": "ipv4", 00:08:35.268 "trsvcid": "$NVMF_PORT", 00:08:35.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.268 "hdgst": ${hdgst:-false}, 00:08:35.268 "ddgst": ${ddgst:-false} 00:08:35.268 }, 00:08:35.268 "method": "bdev_nvme_attach_controller" 00:08:35.268 } 00:08:35.268 EOF 00:08:35.268 )") 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2225721 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.268 "params": { 00:08:35.268 "name": "Nvme1", 00:08:35.268 "trtype": "rdma", 00:08:35.268 "traddr": "192.168.100.8", 00:08:35.268 "adrfam": "ipv4", 00:08:35.268 "trsvcid": "4420", 00:08:35.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.268 "hdgst": false, 00:08:35.268 "ddgst": false 00:08:35.268 }, 00:08:35.268 "method": "bdev_nvme_attach_controller" 00:08:35.268 }' 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:35.268 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.268 "params": { 00:08:35.268 "name": "Nvme1", 00:08:35.268 "trtype": "rdma", 00:08:35.268 "traddr": "192.168.100.8", 00:08:35.268 "adrfam": "ipv4", 00:08:35.268 "trsvcid": "4420", 00:08:35.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.268 "hdgst": false, 00:08:35.268 "ddgst": false 00:08:35.268 }, 00:08:35.268 "method": "bdev_nvme_attach_controller" 00:08:35.268 }' 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.268 "params": { 00:08:35.268 "name": "Nvme1", 00:08:35.268 "trtype": "rdma", 00:08:35.268 "traddr": "192.168.100.8", 00:08:35.268 "adrfam": "ipv4", 00:08:35.268 "trsvcid": "4420", 00:08:35.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.268 "hdgst": false, 00:08:35.268 "ddgst": false 00:08:35.268 }, 00:08:35.268 "method": "bdev_nvme_attach_controller" 00:08:35.268 }' 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 17:56:43 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.268 "params": { 00:08:35.268 "name": "Nvme1", 00:08:35.268 "trtype": "rdma", 00:08:35.268 "traddr": "192.168.100.8", 00:08:35.268 "adrfam": "ipv4", 00:08:35.268 "trsvcid": "4420", 00:08:35.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.268 "hdgst": false, 00:08:35.268 "ddgst": false 00:08:35.268 }, 00:08:35.268 "method": "bdev_nvme_attach_controller" 00:08:35.268 }'
[2024-12-09 17:56:43.040688] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:08:35.268 [2024-12-09 17:56:43.040693] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:08:35.268 [2024-12-09 17:56:43.040692] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:08:35.268 [2024-12-09 17:56:43.040744] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:35.268 [2024-12-09 17:56:43.040745] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:08:35.268 [2024-12-09 17:56:43.040744] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:08:35.268 [2024-12-09 17:56:43.046446] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
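The four identical JSON payloads printed above are the bdev_nvme_attach_controller configs handed to the four bdevperf instances; the --json /dev/fd/63 in their command lines is the signature of bash process substitution. A sketch of the launch-and-wait pattern, reconstructed from the core masks, shm ids, and PIDs in the trace (gen_nvmf_target_json is the trace's own helper, and the script actually waits on each PID individually; a single wait is equivalent here):

    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    # One instance per workload, each with its own core mask (-m) and shm id (-i),
    # all attaching to the same Nvme1 controller over RDMA.
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"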
00:08:35.268 [2024-12-09 17:56:43.046495] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:35.268 [2024-12-09 17:56:43.237673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.526 [2024-12-09 17:56:43.278914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:35.526 [2024-12-09 17:56:43.328989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.526 [2024-12-09 17:56:43.370924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:35.526 [2024-12-09 17:56:43.385496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.527 [2024-12-09 17:56:43.421079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:35.527 [2024-12-09 17:56:43.485990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.785 [2024-12-09 17:56:43.533310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:35.785 Running I/O for 1 seconds... 00:08:35.785 Running I/O for 1 seconds... 00:08:35.785 Running I/O for 1 seconds... 00:08:35.785 Running I/O for 1 seconds... 00:08:36.719 16428.00 IOPS, 64.17 MiB/s [2024-12-09T16:56:44.698Z] 17570.00 IOPS, 68.63 MiB/s [2024-12-09T16:56:44.698Z] 15102.00 IOPS, 58.99 MiB/s 00:08:36.719 Latency(us) 00:08:36.719 [2024-12-09T16:56:44.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.719 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:36.719 Nvme1n1 : 1.01 16468.63 64.33 0.00 0.00 7747.54 4771.02 18035.51 00:08:36.719 [2024-12-09T16:56:44.698Z] =================================================================================================================== 00:08:36.719 [2024-12-09T16:56:44.698Z] Total : 16468.63 64.33 0.00 0.00 7747.54 4771.02 18035.51 00:08:36.719 00:08:36.719 Latency(us) 00:08:36.719 [2024-12-09T16:56:44.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.719 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:36.719 Nvme1n1 : 1.01 17614.35 68.81 0.00 0.00 7245.53 4272.95 16986.93 00:08:36.719 [2024-12-09T16:56:44.698Z] =================================================================================================================== 00:08:36.719 [2024-12-09T16:56:44.698Z] Total : 17614.35 68.81 0.00 0.00 7245.53 4272.95 16986.93 00:08:36.719 00:08:36.719 Latency(us) 00:08:36.719 [2024-12-09T16:56:44.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.719 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:36.719 Nvme1n1 : 1.01 15151.36 59.19 0.00 0.00 8423.19 5111.81 17406.36 00:08:36.719 [2024-12-09T16:56:44.698Z] =================================================================================================================== 00:08:36.719 [2024-12-09T16:56:44.698Z] Total : 15151.36 59.19 0.00 0.00 8423.19 5111.81 17406.36 00:08:36.719 254536.00 IOPS, 994.28 MiB/s 00:08:36.719 Latency(us) 00:08:36.719 [2024-12-09T16:56:44.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.719 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:36.719 Nvme1n1 : 1.00 254158.44 992.81 0.00 0.00 500.54 219.55 2018.51 00:08:36.719 [2024-12-09T16:56:44.698Z] 
=================================================================================================================== 00:08:36.719 [2024-12-09T16:56:44.698Z] Total : 254158.44 992.81 0.00 0.00 500.54 219.55 2018.51 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2225723 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2225725 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2225728 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:36.978 rmmod nvme_rdma 00:08:36.978 rmmod nvme_fabrics 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2225554 ']' 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2225554 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2225554 ']' 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2225554 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.978 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225554 00:08:37.237 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.237 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:08:37.237 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225554' 00:08:37.237 killing process with pid 2225554 00:08:37.237 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2225554 00:08:37.237 17:56:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2225554 00:08:37.237 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.237 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:37.237 00:08:37.237 real 0m11.012s 00:08:37.237 user 0m20.327s 00:08:37.237 sys 0m7.078s 00:08:37.237 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.237 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.237 ************************************ 00:08:37.237 END TEST nvmf_bdev_io_wait 00:08:37.237 ************************************ 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.502 ************************************ 00:08:37.502 START TEST nvmf_queue_depth 00:08:37.502 ************************************ 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:37.502 * Looking for test storage... 
00:08:37.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.502 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.762 --rc genhtml_branch_coverage=1 00:08:37.762 --rc genhtml_function_coverage=1 00:08:37.762 --rc genhtml_legend=1 00:08:37.762 --rc geninfo_all_blocks=1 00:08:37.762 --rc geninfo_unexecuted_blocks=1 00:08:37.762 00:08:37.762 ' 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.762 --rc genhtml_branch_coverage=1 00:08:37.762 --rc genhtml_function_coverage=1 00:08:37.762 --rc genhtml_legend=1 00:08:37.762 --rc geninfo_all_blocks=1 00:08:37.762 --rc geninfo_unexecuted_blocks=1 00:08:37.762 00:08:37.762 ' 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.762 --rc genhtml_branch_coverage=1 00:08:37.762 --rc genhtml_function_coverage=1 00:08:37.762 --rc genhtml_legend=1 00:08:37.762 --rc geninfo_all_blocks=1 00:08:37.762 --rc geninfo_unexecuted_blocks=1 00:08:37.762 00:08:37.762 ' 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.762 --rc genhtml_branch_coverage=1 00:08:37.762 --rc genhtml_function_coverage=1 00:08:37.762 --rc genhtml_legend=1 00:08:37.762 --rc geninfo_all_blocks=1 00:08:37.762 --rc geninfo_unexecuted_blocks=1 00:08:37.762 00:08:37.762 ' 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.762 17:56:45 
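The lt/cmp_versions trace above (used to decide which lcov options apply) compares dotted version strings field by field. A condensed sketch of the same logic; this is a simplification of scripts/common.sh, not a verbatim copy:

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.- v max
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields count as 0, so 1.15 compares as 1.15.0 against 2.0.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == "<" ]]; return; }
        done
        [[ $2 == "==" ]]   # all fields equal
    }
    lt 1.15 2 && echo "lcov older than 2"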
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.762 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.763 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.763 17:56:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:45.890 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:45.890 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
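The probe above has matched the two Mellanox ConnectX functions (0x15b3:0x1015) at 0000:d9:00.0 and 0000:d9:00.1; what follows in the trace maps each PCI function to its netdev name through sysfs. A minimal sketch of that mapping, using this rig's addresses:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done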
00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:45.890 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:45.890 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:45.890 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:45.891 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:45.891 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:45.891 altname enp217s0f0np0 00:08:45.891 altname ens818f0np0 00:08:45.891 inet 192.168.100.8/24 scope global mlx_0_0 00:08:45.891 valid_lft forever preferred_lft forever 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:45.891 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:45.891 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:45.891 altname enp217s0f1np1 00:08:45.891 altname ens818f1np1 00:08:45.891 inet 192.168.100.9/24 scope global mlx_0_1 00:08:45.891 valid_lft forever preferred_lft forever 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:45.891 17:56:52 
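load_ib_rdma_modules, traced above, is an ordered modprobe of the RDMA core and connection-manager stack; since is_hw=yes on this mlx5 rig, the soft-RoCE (rxe) fallback is never configured and rxe_cfg is effectively a no-op here:

    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm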
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:45.891 192.168.100.9' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:45.891 192.168.100.9' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:45.891 192.168.100.9' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2229584 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2229584 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2229584 ']' 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.891 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.892 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.892 17:56:52 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 [2024-12-09 17:56:52.848199] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:08:45.892 [2024-12-09 17:56:52.848257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.892 [2024-12-09 17:56:52.943417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.892 [2024-12-09 17:56:52.982858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.892 [2024-12-09 17:56:52.982897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.892 [2024-12-09 17:56:52.982906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.892 [2024-12-09 17:56:52.982915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.892 [2024-12-09 17:56:52.982922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.892 [2024-12-09 17:56:52.983534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 [2024-12-09 17:56:53.754655] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f3d9a0/0x1f41e90) succeed. 00:08:45.892 [2024-12-09 17:56:53.763720] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f3ee50/0x1f83530) succeed. 
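Up to this point the harness has only prepared the fabric: nvmf/common.sh loads the kernel IB/RDMA stack, enumerates the RDMA-capable net devices, reads one IPv4 address per port, and then nvmfappstart launches nvmf_tgt pinned to core 1 (mask 0x2), whose startup notices and IB device creation appear just above. A minimal standalone sketch of that prep, with the interface names and addresses taken from this run (they will differ on other hosts):

    # RDMA prep as done by nvmf/common.sh above; mlx_0_* names are from this run.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

    # get_ip_address: first IPv4 address of an interface. `ip -o -4 addr show`
    # prints one record per address; field 4 holds addr/prefix, e.g. 192.168.100.8/24.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    for nic in mlx_0_0 mlx_0_1; do
        echo "$nic -> $(get_ip_address "$nic")"   # 192.168.100.8 and .9 here
    done

The first address becomes NVMF_FIRST_TARGET_IP, the second NVMF_SECOND_TARGET_IP, and the transport options are fixed to '-t rdma --num-shared-buffers 1024' before the target is configured.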
00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 Malloc0 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.892 [2024-12-09 17:56:53.857741] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2229744 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2229744 /var/tmp/bdevperf.sock 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2229744 ']' 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.892 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.151 17:56:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.151 [2024-12-09 17:56:53.911004] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:08:46.151 [2024-12-09 17:56:53.911049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229744 ] 00:08:46.151 [2024-12-09 17:56:54.002185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.151 [2024-12-09 17:56:54.040791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.088 NVMe0n1 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.088 17:56:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.088 Running I/O for 10 seconds... 
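The whole queue-depth fixture is now in place and fits in a dozen commands: one nvmf_tgt on core mask 0x2, an RDMA transport with 1024 shared buffers, a 64 MB malloc bdev exported as a namespace of cnode1, and a bdevperf initiator driving 4 KiB verify I/O at queue depth 1024 for 10 seconds. Condensed into plain shell from the RPCs logged above (rpc_cmd in the log is a wrapper around scripts/rpc.py; the $spdk shorthand stands in for the Jenkins workspace path):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # workspace path from this log
    rpc=$spdk/scripts/rpc.py

    # Target side: start nvmf_tgt on core mask 0x2; the harness waits for
    # /var/tmp/spdk.sock (waitforlisten) before issuing RPCs.
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

    # Initiator side: bdevperf starts idle (-z), a controller is attached over
    # RDMA, then perform_tests triggers the 10-second qd=1024 verify run.
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS ticker and the final latency table that follow are bdevperf's own output for that run.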
00:08:48.960 17069.00 IOPS, 66.68 MiB/s [2024-12-09T16:56:58.317Z] 17408.00 IOPS, 68.00 MiB/s [2024-12-09T16:56:59.253Z] 17501.67 IOPS, 68.37 MiB/s [2024-12-09T16:57:00.188Z] 17633.00 IOPS, 68.88 MiB/s [2024-12-09T16:57:01.124Z] 17612.80 IOPS, 68.80 MiB/s [2024-12-09T16:57:02.060Z] 17624.17 IOPS, 68.84 MiB/s [2024-12-09T16:57:02.996Z] 17696.71 IOPS, 69.13 MiB/s [2024-12-09T16:57:04.374Z] 17664.00 IOPS, 69.00 MiB/s [2024-12-09T16:57:05.310Z] 17703.78 IOPS, 69.16 MiB/s [2024-12-09T16:57:05.310Z] 17715.20 IOPS, 69.20 MiB/s 00:08:57.331 Latency(us) 00:08:57.331 [2024-12-09T16:57:05.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.331 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:57.331 Verification LBA range: start 0x0 length 0x4000 00:08:57.331 NVMe0n1 : 10.05 17727.90 69.25 0.00 0.00 57615.34 22439.53 37539.02 00:08:57.331 [2024-12-09T16:57:05.310Z] =================================================================================================================== 00:08:57.331 [2024-12-09T16:57:05.310Z] Total : 17727.90 69.25 0.00 0.00 57615.34 22439.53 37539.02 00:08:57.331 { 00:08:57.331 "results": [ 00:08:57.331 { 00:08:57.331 "job": "NVMe0n1", 00:08:57.331 "core_mask": "0x1", 00:08:57.331 "workload": "verify", 00:08:57.331 "status": "finished", 00:08:57.331 "verify_range": { 00:08:57.331 "start": 0, 00:08:57.331 "length": 16384 00:08:57.331 }, 00:08:57.331 "queue_depth": 1024, 00:08:57.331 "io_size": 4096, 00:08:57.331 "runtime": 10.049018, 00:08:57.331 "iops": 17727.901373049586, 00:08:57.331 "mibps": 69.24961473847495, 00:08:57.331 "io_failed": 0, 00:08:57.331 "io_timeout": 0, 00:08:57.331 "avg_latency_us": 57615.33937606932, 00:08:57.331 "min_latency_us": 22439.5264, 00:08:57.331 "max_latency_us": 37539.0208 00:08:57.331 } 00:08:57.331 ], 00:08:57.331 "core_count": 1 00:08:57.331 } 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2229744 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2229744 ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2229744 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229744 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229744' 00:08:57.331 killing process with pid 2229744 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2229744 00:08:57.331 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.331 00:08:57.331 Latency(us) 00:08:57.331 [2024-12-09T16:57:05.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.331 [2024-12-09T16:57:05.310Z] 
=================================================================================================================== 00:08:57.331 [2024-12-09T16:57:05.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2229744 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:57.331 rmmod nvme_rdma 00:08:57.331 rmmod nvme_fabrics 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2229584 ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2229584 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2229584 ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2229584 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.331 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229584 00:08:57.589 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:57.589 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:57.590 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229584' 00:08:57.590 killing process with pid 2229584 00:08:57.590 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2229584 00:08:57.590 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2229584 00:08:57.848 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.848 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:57.848 00:08:57.848 real 0m20.290s 00:08:57.848 user 0m26.486s 00:08:57.848 sys 0m6.343s 00:08:57.848 
17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.849 ************************************ 00:08:57.849 END TEST nvmf_queue_depth 00:08:57.849 ************************************ 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.849 ************************************ 00:08:57.849 START TEST nvmf_target_multipath 00:08:57.849 ************************************ 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:57.849 * Looking for test storage... 00:08:57.849 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.849 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.108 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.108 --rc genhtml_branch_coverage=1 00:08:58.109 --rc genhtml_function_coverage=1 00:08:58.109 --rc genhtml_legend=1 00:08:58.109 --rc geninfo_all_blocks=1 00:08:58.109 --rc geninfo_unexecuted_blocks=1 00:08:58.109 00:08:58.109 ' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.109 --rc genhtml_branch_coverage=1 00:08:58.109 --rc genhtml_function_coverage=1 00:08:58.109 --rc genhtml_legend=1 00:08:58.109 --rc geninfo_all_blocks=1 00:08:58.109 --rc geninfo_unexecuted_blocks=1 00:08:58.109 00:08:58.109 ' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.109 --rc genhtml_branch_coverage=1 00:08:58.109 --rc genhtml_function_coverage=1 00:08:58.109 --rc genhtml_legend=1 00:08:58.109 --rc geninfo_all_blocks=1 00:08:58.109 --rc geninfo_unexecuted_blocks=1 00:08:58.109 00:08:58.109 ' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.109 --rc genhtml_branch_coverage=1 00:08:58.109 --rc genhtml_function_coverage=1 00:08:58.109 --rc genhtml_legend=1 00:08:58.109 --rc geninfo_all_blocks=1 00:08:58.109 --rc geninfo_unexecuted_blocks=1 00:08:58.109 00:08:58.109 ' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.109 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.109 17:57:05 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:06.240 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:06.240 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:06.240 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:06.240 
17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:06.240 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
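The probe that just ran is common.sh's gather_supported_nvmf_pci_devs: it walks a fixed table of Intel (e810/x722) and Mellanox PCI IDs, keeps the two mlx5 ports it finds at 0000:d9:00.0/.1 (0x15b3 - 0x1015), and resolves each one to its net device through sysfs. The resolution step, using the sysfs layout the log shows:

    # How 'Found net devices under 0000:d9:00.0: mlx_0_0' is computed:
    # a PCI function lists its net devices under /sys/bus/pci/devices/$pci/net/.
    pci=0000:d9:00.0                              # first mlx5 port in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")       # keep only the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

Because the ports are hardware mlx5 rather than Soft-RoCE, rxe_cfg has nothing to add, is_hw is set to yes, and NVME_CONNECT is switched to 'nvme connect -i 15' for the RDMA path.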
00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:06.240 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:06.241 17:57:12 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:06.241 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.241 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:06.241 altname enp217s0f0np0 00:09:06.241 altname ens818f0np0 00:09:06.241 inet 192.168.100.8/24 scope global mlx_0_0 00:09:06.241 valid_lft forever preferred_lft forever 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:06.241 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.241 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:06.241 altname enp217s0f1np1 00:09:06.241 altname ens818f1np1 00:09:06.241 inet 192.168.100.9/24 scope global mlx_0_1 00:09:06.241 valid_lft forever preferred_lft forever 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:06.241 192.168.100.9' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:06.241 192.168.100.9' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:06.241 192.168.100.9' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:09:06.241 run this test only with TCP transport for now 00:09:06.241 17:57:13 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:06.241 rmmod nvme_rdma 00:09:06.241 rmmod nvme_fabrics 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:06.241 00:09:06.241 real 0m7.566s 00:09:06.241 user 0m2.139s 00:09:06.241 sys 0m5.632s 00:09:06.241 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
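The multipath test exits early on RDMA ("run this test only with TCP transport for now") and nvmftestfini then tears down the kernel modules; the trace shows a deliberately tolerant unload loop (sync, set +e, up to 20 attempts, set -e, return 0). A minimal standalone sketch of that retry pattern, assuming root privileges and the module names seen above -- not the verbatim nvmftestfini implementation:

    #!/usr/bin/env bash
    # Best-effort unload of the NVMe-oF initiator modules, mirroring the
    # set +e / retry loop visible in the trace above.
    unload_nvme_modules() {
        sync                       # flush outstanding I/O first
        set +e                     # unload may fail while references remain
        local i
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
            sleep 1                # give in-flight connections time to drop
        done
        set -e
        return 0                   # cleanup is best-effort by design
    }
    unload_nvme_modules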
00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:06.242 ************************************ 00:09:06.242 END TEST nvmf_target_multipath 00:09:06.242 ************************************ 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.242 ************************************ 00:09:06.242 START TEST nvmf_zcopy 00:09:06.242 ************************************ 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:09:06.242 * Looking for test storage... 00:09:06.242 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.242 --rc genhtml_branch_coverage=1 00:09:06.242 --rc genhtml_function_coverage=1 00:09:06.242 --rc genhtml_legend=1 00:09:06.242 --rc geninfo_all_blocks=1 00:09:06.242 --rc geninfo_unexecuted_blocks=1 00:09:06.242 00:09:06.242 ' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.242 --rc genhtml_branch_coverage=1 00:09:06.242 --rc genhtml_function_coverage=1 00:09:06.242 --rc genhtml_legend=1 00:09:06.242 --rc geninfo_all_blocks=1 00:09:06.242 --rc geninfo_unexecuted_blocks=1 00:09:06.242 00:09:06.242 ' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.242 --rc genhtml_branch_coverage=1 00:09:06.242 --rc genhtml_function_coverage=1 00:09:06.242 --rc genhtml_legend=1 00:09:06.242 --rc geninfo_all_blocks=1 00:09:06.242 --rc geninfo_unexecuted_blocks=1 00:09:06.242 00:09:06.242 ' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.242 --rc genhtml_branch_coverage=1 00:09:06.242 --rc genhtml_function_coverage=1 00:09:06.242 --rc genhtml_legend=1 00:09:06.242 --rc geninfo_all_blocks=1 00:09:06.242 --rc geninfo_unexecuted_blocks=1 00:09:06.242 00:09:06.242 ' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.242 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.243 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.243 17:57:13 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:12.813 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:12.814 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:12.814 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
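The device scan above builds per-family ID lists (e810, x722, mlx) from pci_bus_cache entries keyed as "vendor:device", then reports each hit ("Found 0000:d9:00.0 (0x15b3 - 0x1015)"). The pci_bus_cache lookup is SPDK-internal; a hedged standalone sketch of the same vendor/device matching, assuming only sysfs:

    #!/usr/bin/env bash
    # Enumerate Mellanox (0x15b3) NICs the way the trace's device scan does:
    # match vendor:device pairs and print the PCI address for each hit.
    mellanox=0x15b3
    # Device IDs taken from the mlx list in the trace (0x1015 is the hit here).
    wanted=(0x1013 0x1015 0x1017 0x1019 0x101b 0x101d 0x1021 0xa2d6 0xa2dc)
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        [[ $vendor == "$mellanox" ]] || continue
        for id in "${wanted[@]}"; do
            if [[ $device == "$id" ]]; then
                echo "Found ${dev##*/} ($vendor - $device)"
                break
            fi
        done
    done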
00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:12.814 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:12.814 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:12.814 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:12.814 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:12.814 altname enp217s0f0np0 00:09:12.814 altname ens818f0np0 00:09:12.814 inet 192.168.100.8/24 scope global mlx_0_0 
00:09:12.814 valid_lft forever preferred_lft forever 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:12.814 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:12.814 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:12.814 altname enp217s0f1np1 00:09:12.814 altname ens818f1np1 00:09:12.814 inet 192.168.100.9/24 scope global mlx_0_1 00:09:12.814 valid_lft forever preferred_lft forever 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:12.814 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.815 17:57:20 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:12.815 192.168.100.9' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:12.815 192.168.100.9' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:12.815 192.168.100.9' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.815 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2239026 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2239026 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2239026 ']' 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.119 17:57:20 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.119 [2024-12-09 17:57:20.844650] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:09:13.119 [2024-12-09 17:57:20.844706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.119 [2024-12-09 17:57:20.939665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.119 [2024-12-09 17:57:20.978484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.119 [2024-12-09 17:57:20.978521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.119 [2024-12-09 17:57:20.978530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.119 [2024-12-09 17:57:20.978539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.119 [2024-12-09 17:57:20.978562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
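Here nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0x2 and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that readiness poll, assuming SPDK's stock rpc.py (rpc_get_methods is a standard RPC; the retry count and sleep interval below are illustrative, not waitforlisten's exact values):

    #!/usr/bin/env bash
    # Start the target in the background, then poll the RPC socket until
    # it answers -- roughly what waitforlisten does for pid 2239026 above.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path from the trace
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for _ in {1..100}; do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
                &> /dev/null; then
            echo "target (pid $nvmfpid) is listening"
            break
        fi
        # Bail out early if the target died during startup.
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target died"; exit 1; }
        sleep 0.1
    done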
00:09:13.119 [2024-12-09 17:57:20.979168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.707 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.707 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:13.707 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.707 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.707 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:09:13.966 Unsupported transport: rdma 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:13.966 nvmf_trace.0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:13.966 rmmod nvme_rdma 00:09:13.966 rmmod nvme_fabrics 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
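Before teardown, process_shm collects the tracepoint shared-memory file (nvmf_trace.0, produced because -e 0xFFFF enabled all trace groups) and archives it into the job's output directory for offline analysis with spdk_trace. The same find + tar step in isolation, assuming the shm-ID suffix 0 and the output path seen in the trace:

    #!/usr/bin/env bash
    # Archive SPDK trace shared-memory files, mirroring the pipeline above.
    id=0                                        # NVMF_APP_SHM_ID in this run
    out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
    mapfile -t shm_files < <(find /dev/shm -name "*.$id" -printf '%f\n')
    for f in "${shm_files[@]}"; do
        tar -C /dev/shm/ -czf "$out/${f}_shm.tar.gz" "$f"
    done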
00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:13.966 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2239026 ']' 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2239026 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2239026 ']' 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2239026 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2239026 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2239026' 00:09:13.967 killing process with pid 2239026 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2239026 00:09:13.967 17:57:21 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2239026 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:14.226 00:09:14.226 real 0m8.752s 00:09:14.226 user 0m3.588s 00:09:14.226 sys 0m5.980s 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.226 ************************************ 00:09:14.226 END TEST nvmf_zcopy 00:09:14.226 ************************************ 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.226 ************************************ 00:09:14.226 START TEST nvmf_nmic 00:09:14.226 ************************************ 00:09:14.226 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:09:14.486 * Looking for test storage... 
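killprocess then stops the target guardedly: verify the pid is alive with kill -0, read its command name via ps --no-headers -o comm=, refuse to signal a process whose comm is the literal 'sudo' wrapper, then kill and reap with wait. A compact sketch of that guard, following the steps traced above (the sudo branch is simplified away here):

    #!/usr/bin/env bash
    # Guarded process kill, as traced for pid 2239026 (comm 'reactor_1').
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1   # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # Never signal a bare 'sudo' wrapper; target the real child instead.
        [ "$process_name" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reap if it is our child
    }
    killprocess 2239026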
00:09:14.486 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.486 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.487 --rc genhtml_branch_coverage=1 00:09:14.487 --rc genhtml_function_coverage=1 00:09:14.487 --rc genhtml_legend=1 00:09:14.487 --rc geninfo_all_blocks=1 00:09:14.487 --rc geninfo_unexecuted_blocks=1 00:09:14.487 00:09:14.487 ' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.487 --rc genhtml_branch_coverage=1 00:09:14.487 --rc genhtml_function_coverage=1 00:09:14.487 --rc genhtml_legend=1 00:09:14.487 --rc geninfo_all_blocks=1 00:09:14.487 --rc geninfo_unexecuted_blocks=1 00:09:14.487 00:09:14.487 ' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.487 --rc genhtml_branch_coverage=1 00:09:14.487 --rc genhtml_function_coverage=1 00:09:14.487 --rc genhtml_legend=1 00:09:14.487 --rc geninfo_all_blocks=1 00:09:14.487 --rc geninfo_unexecuted_blocks=1 00:09:14.487 00:09:14.487 ' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.487 --rc genhtml_branch_coverage=1 00:09:14.487 --rc genhtml_function_coverage=1 00:09:14.487 --rc genhtml_legend=1 00:09:14.487 --rc geninfo_all_blocks=1 00:09:14.487 --rc geninfo_unexecuted_blocks=1 00:09:14.487 00:09:14.487 ' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
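The "integer expression expected" message in the trace above is bash complaining that an empty string reached a numeric test at nvmf/common.sh line 33 (the trace shows '[' '' -eq 1 ']'). A minimal sketch of the failing pattern and a defensive rewrite; the flag name here is hypothetical, since the actual variable at line 33 is not visible in the trace:

# An unset or empty variable reaching a numeric test reproduces the error.
SOME_FLAG=""
[ "$SOME_FLAG" -eq 1 ] && echo enabled    # -> [: : integer expression expected

# Substituting a default keeps the test well-formed even when the flag is unset.
[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled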
00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.487 17:57:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:22.611 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.612 17:57:29 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:22.612 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:22.612 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:22.612 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:22.612 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
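The rdma_device_init step above brings up the kernel RDMA stack module by module before any interface configuration happens. A standalone sketch of the same sequence with a verification step added (an approximation, not the SPDK helper itself):

# Load the InfiniBand/RDMA modules required for the NVMe-oF RDMA transport.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
done
# Confirm the stack is resident before configuring interfaces.
lsmod | grep -E '^(rdma_ucm|rdma_cm|ib_uverbs)'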
00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:22.612 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:22.612 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:22.612 altname enp217s0f0np0 00:09:22.612 altname 
ens818f0np0 00:09:22.612 inet 192.168.100.8/24 scope global mlx_0_0 00:09:22.612 valid_lft forever preferred_lft forever 00:09:22.612 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:22.613 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:22.613 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:22.613 altname enp217s0f1np1 00:09:22.613 altname ens818f1np1 00:09:22.613 inet 192.168.100.9/24 scope global mlx_0_1 00:09:22.613 valid_lft forever preferred_lft forever 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
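allocate_nic_ips resolves each RDMA interface to its IPv4 address with the ip/awk/cut pipeline seen above (yielding 192.168.100.8 and 192.168.100.9 on this bed). The same logic as nvmf/common.sh's get_ip_address, condensed into a self-contained helper:

# Print the first IPv4 address of an interface, without the /prefix suffix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # prints 192.168.100.8 on this test bed
get_ip_address mlx_0_1    # prints 192.168.100.9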
00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:22.613 192.168.100.9' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:22.613 192.168.100.9' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:22.613 192.168.100.9' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2242741 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2242741 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2242741 ']' 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.613 17:57:29 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.613 [2024-12-09 17:57:29.738313] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:09:22.613 [2024-12-09 17:57:29.738373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.613 [2024-12-09 17:57:29.828744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.613 [2024-12-09 17:57:29.870120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.613 [2024-12-09 17:57:29.870162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.613 [2024-12-09 17:57:29.870171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.613 [2024-12-09 17:57:29.870179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.613 [2024-12-09 17:57:29.870186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
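nvmfappstart launches the target application and blocks in waitforlisten until the JSON-RPC socket accepts connections; the rpc_cmd calls that follow (creating the rdma transport, a Malloc bdev, the subsystem, its namespace and listeners) all go over that socket via scripts/rpc.py. A condensed sketch of the start-and-wait step, assuming the default socket path; the real waitforlisten in autotest_common.sh does more bookkeeping:

# Start the NVMe-oF target with the trace's flags and wait for its RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break    # socket is up, RPCs can be sent
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.1
done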
00:09:22.613 [2024-12-09 17:57:29.871811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.613 [2024-12-09 17:57:29.871922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.613 [2024-12-09 17:57:29.872035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.613 [2024-12-09 17:57:29.872035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.613 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.613 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.872 [2024-12-09 17:57:30.662197] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17c5980/0x17c9e70) succeed. 00:09:22.872 [2024-12-09 17:57:30.671458] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17c7010/0x180b510) succeed. 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.872 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.873 Malloc0 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.873 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:23.132 17:57:30 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 [2024-12-09 17:57:30.861134] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:23.132 test case1: single bdev can't be used in multiple subsystems 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 [2024-12-09 17:57:30.884904] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:23.132 [2024-12-09 17:57:30.884926] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:23.132 [2024-12-09 17:57:30.884935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.132 request: 00:09:23.132 { 00:09:23.132 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:23.132 "namespace": { 00:09:23.132 "bdev_name": "Malloc0", 00:09:23.132 "no_auto_visible": false, 00:09:23.132 "hide_metadata": false 00:09:23.132 }, 00:09:23.132 "method": "nvmf_subsystem_add_ns", 00:09:23.132 "req_id": 1 00:09:23.132 } 00:09:23.132 Got JSON-RPC error response 00:09:23.132 response: 00:09:23.132 { 00:09:23.132 "code": -32602, 00:09:23.132 "message": "Invalid parameters" 00:09:23.132 } 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:09:23.132 Adding namespace failed - expected result. 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:23.132 test case2: host connect to nvmf target in multiple paths 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 [2024-12-09 17:57:30.900980] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.132 17:57:30 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:24.067 17:57:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:09:25.004 17:57:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.004 17:57:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:25.004 17:57:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.004 17:57:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:25.004 17:57:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:27.536 17:57:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:27.536 [global] 00:09:27.536 thread=1 00:09:27.536 invalidate=1 00:09:27.536 rw=write 00:09:27.536 time_based=1 00:09:27.536 runtime=1 00:09:27.536 ioengine=libaio 00:09:27.536 direct=1 00:09:27.536 bs=4096 00:09:27.536 iodepth=1 00:09:27.536 norandommap=0 00:09:27.536 numjobs=1 00:09:27.536 00:09:27.536 verify_dump=1 00:09:27.536 verify_backlog=512 00:09:27.536 verify_state_save=0 00:09:27.536 do_verify=1 00:09:27.536 verify=crc32c-intel 00:09:27.536 [job0] 00:09:27.536 filename=/dev/nvme0n1 00:09:27.536 Could not set queue depth 
(nvme0n1) 00:09:27.536 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.536 fio-3.35 00:09:27.536 Starting 1 thread 00:09:28.471 00:09:28.471 job0: (groupid=0, jobs=1): err= 0: pid=2243880: Mon Dec 9 17:57:36 2024 00:09:28.471 read: IOPS=6989, BW=27.3MiB/s (28.6MB/s)(27.3MiB/1001msec) 00:09:28.471 slat (nsec): min=8332, max=29232, avg=8834.12, stdev=762.92 00:09:28.471 clat (nsec): min=43574, max=79342, avg=58916.35, stdev=3222.14 00:09:28.471 lat (nsec): min=59289, max=88644, avg=67750.47, stdev=3253.29 00:09:28.471 clat percentiles (nsec): 00:09:28.471 | 1.00th=[52480], 5.00th=[54016], 10.00th=[55040], 20.00th=[56064], 00:09:28.471 | 30.00th=[57088], 40.00th=[58112], 50.00th=[58624], 60.00th=[59648], 00:09:28.471 | 70.00th=[60672], 80.00th=[61696], 90.00th=[63232], 95.00th=[64256], 00:09:28.471 | 99.00th=[67072], 99.50th=[68096], 99.90th=[72192], 99.95th=[75264], 00:09:28.471 | 99.99th=[79360] 00:09:28.471 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:09:28.471 slat (nsec): min=8410, max=36088, avg=11198.19, stdev=1132.98 00:09:28.471 clat (usec): min=31, max=1526, avg=57.12, stdev=18.21 00:09:28.471 lat (usec): min=56, max=1536, avg=68.31, stdev=18.24 00:09:28.471 clat percentiles (usec): 00:09:28.471 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:09:28.471 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:09:28.471 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 63], 00:09:28.471 | 99.00th=[ 66], 99.50th=[ 68], 99.90th=[ 76], 99.95th=[ 137], 00:09:28.471 | 99.99th=[ 1532] 00:09:28.471 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:09:28.471 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:09:28.471 lat (usec) : 50=0.32%, 100=99.64%, 250=0.02%, 500=0.01% 00:09:28.471 lat (msec) : 2=0.01% 00:09:28.472 cpu : usr=10.20%, sys=19.40%, ctx=14164, majf=0, minf=1 00:09:28.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.472 issued rwts: total=6996,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.472 00:09:28.472 Run status group 0 (all jobs): 00:09:28.472 READ: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=27.3MiB (28.7MB), run=1001-1001msec 00:09:28.472 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:09:28.472 00:09:28.472 Disk stats (read/write): 00:09:28.472 nvme0n1: ios=6194/6589, merge=0/0, ticks=336/316, in_queue=652, util=90.58% 00:09:28.472 17:57:36 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.374 17:57:38 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:30.374 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:30.633 rmmod nvme_rdma 00:09:30.633 rmmod nvme_fabrics 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2242741 ']' 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2242741 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2242741 ']' 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2242741 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2242741 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2242741' 00:09:30.633 killing process with pid 2242741 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2242741 00:09:30.633 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2242741 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:30.892 00:09:30.892 real 0m16.597s 00:09:30.892 user 0m45.988s 00:09:30.892 sys 0m6.642s 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.892 ************************************ 00:09:30.892 END TEST nvmf_nmic 00:09:30.892 ************************************ 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.892 ************************************ 00:09:30.892 START TEST nvmf_fio_target 00:09:30.892 ************************************ 00:09:30.892 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:31.151 * Looking for test storage... 00:09:31.152 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:31.152 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:31.152 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:31.152 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:31.152 17:57:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.152 --rc genhtml_branch_coverage=1 00:09:31.152 --rc genhtml_function_coverage=1 00:09:31.152 --rc genhtml_legend=1 00:09:31.152 --rc geninfo_all_blocks=1 00:09:31.152 --rc geninfo_unexecuted_blocks=1 00:09:31.152 00:09:31.152 ' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.152 --rc genhtml_branch_coverage=1 00:09:31.152 --rc genhtml_function_coverage=1 00:09:31.152 --rc genhtml_legend=1 00:09:31.152 --rc geninfo_all_blocks=1 00:09:31.152 --rc geninfo_unexecuted_blocks=1 00:09:31.152 00:09:31.152 ' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.152 --rc genhtml_branch_coverage=1 00:09:31.152 --rc genhtml_function_coverage=1 00:09:31.152 --rc genhtml_legend=1 00:09:31.152 --rc geninfo_all_blocks=1 00:09:31.152 --rc geninfo_unexecuted_blocks=1 00:09:31.152 00:09:31.152 ' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.152 --rc genhtml_branch_coverage=1 00:09:31.152 --rc genhtml_function_coverage=1 00:09:31.152 --rc genhtml_legend=1 00:09:31.152 --rc geninfo_all_blocks=1 00:09:31.152 --rc geninfo_unexecuted_blocks=1 00:09:31.152 00:09:31.152 ' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.152 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.152 
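The fio-wrapper these target tests invoke writes a job file like the one printed in the nmic run above: libaio, 4 KiB writes at queue depth 1, verified with crc32c. A condensed equivalent written and run by hand; the verify_dump/verify_backlog bookkeeping options are omitted, and the filename is whatever device nvme connect produced (/dev/nvme0n1 on this bed):

# Reproduce the wrapper's write-verify job against the connected namespace.
cat > /tmp/nvmf-job.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4096
iodepth=1
rw=write
time_based=1
runtime=1
numjobs=1
verify=crc32c-intel
do_verify=1

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-job.fio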
17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:31.153 17:57:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
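The e810/x722/mlx device arrays being filled here key into a pci_bus_cache map indexed by "vendor:device" ID pairs (0x8086 for Intel, 0x15b3 for Mellanox). A minimal sketch of how such a cache could be populated from sysfs — the map name matches the trace, but this loop is an assumed illustration, not the actual common.sh implementation:

    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        ven=$(<"$dev/vendor")   # e.g. 0x15b3 (Mellanox)
        did=$(<"$dev/device")   # e.g. 0x1015 (ConnectX-4 Lx, as found below)
        # append this device's BDF (e.g. 0000:d9:00.0) under its vendor:device key
        pci_bus_cache["$ven:$did"]+="${pci_bus_cache["$ven:$did"]:+ }${dev##*/}"
    done

An expansion like ${pci_bus_cache["$mellanox:0x1015"]} then yields every matching BDF at once, which is why the trace below reports both 0000:d9:00.0 and 0000:d9:00.1 as (0x15b3 - 0x1015) devices.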
00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:39.275 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:39.275 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:39.275 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.275 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:39.276 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:39.276 17:57:46 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:39.276 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:39.276 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:39.276 altname enp217s0f0np0 00:09:39.276 altname ens818f0np0 00:09:39.276 inet 192.168.100.8/24 scope global mlx_0_0 00:09:39.276 valid_lft forever preferred_lft forever 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:39.276 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:39.276 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:39.276 altname enp217s0f1np1 00:09:39.276 altname ens818f1np1 00:09:39.276 inet 192.168.100.9/24 scope global mlx_0_1 00:09:39.276 valid_lft forever preferred_lft forever 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:39.276 17:57:46 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:39.276 192.168.100.9' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:39.276 192.168.100.9' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:39.276 192.168.100.9' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:39.276 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2247838 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2247838 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2247838 ']' 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.277 17:57:46 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.277 [2024-12-09 17:57:46.356763] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:09:39.277 [2024-12-09 17:57:46.356815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.277 [2024-12-09 17:57:46.449061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.277 [2024-12-09 17:57:46.490718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
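The two target addresses above are peeled off the newline-separated RDMA_IP_LIST with plain head/tail, as traced at common.sh@485-486; a standalone restatement of that step:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

Each address itself comes from get_ip_address, i.e. ip -o -4 addr show <interface> piped through awk '{print $4}' and cut -d/ -f1, as seen earlier in the trace.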
00:09:39.277 [2024-12-09 17:57:46.490758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.277 [2024-12-09 17:57:46.490767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.277 [2024-12-09 17:57:46.490775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.277 [2024-12-09 17:57:46.490798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.277 [2024-12-09 17:57:46.492457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.277 [2024-12-09 17:57:46.492484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.277 [2024-12-09 17:57:46.492614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.277 [2024-12-09 17:57:46.492615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.277 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:39.536 [2024-12-09 17:57:47.425648] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa97980/0xa9be70) succeed. 00:09:39.536 [2024-12-09 17:57:47.434919] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa99010/0xadd510) succeed. 
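The target plumbing that follows is driven entirely through rpc.py; condensed from the trace below (fio.sh@21-34), the sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                    # repeated: Malloc0..Malloc6, 64 MiB / 512 B blocks
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'    # raid0 over two malloc bdevs
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

with each bdev then attached via nvmf_subsystem_add_ns, the subsystem exposed on a 192.168.100.8:4420 RDMA listener, and nvme connect issued against it. The $rpc shorthand is only for compactness here; the trace invokes the full script path each time.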
00:09:39.794 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.053 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:40.053 17:57:47 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.053 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:40.312 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.312 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:40.312 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.570 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:40.570 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:40.829 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.088 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:41.088 17:57:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.088 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:41.088 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.347 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:41.347 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:41.605 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.864 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.864 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.122 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:42.122 17:57:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.122 17:57:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:42.381 [2024-12-09 17:57:50.228133] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:42.381 17:57:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:42.639 17:57:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:42.898 17:57:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:43.835 17:57:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:43.835 17:57:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:43.835 17:57:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.835 17:57:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:43.835 17:57:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:43.835 17:57:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:45.741 17:57:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:45.741 [global] 00:09:45.741 thread=1 00:09:45.741 invalidate=1 00:09:45.741 rw=write 00:09:45.741 time_based=1 00:09:45.741 runtime=1 00:09:45.741 ioengine=libaio 00:09:45.741 direct=1 00:09:45.741 bs=4096 00:09:45.741 iodepth=1 00:09:45.741 norandommap=0 00:09:45.741 numjobs=1 00:09:45.741 00:09:45.741 verify_dump=1 00:09:45.741 verify_backlog=512 00:09:45.741 verify_state_save=0 00:09:45.741 do_verify=1 00:09:45.741 verify=crc32c-intel 00:09:45.741 [job0] 00:09:45.741 filename=/dev/nvme0n1 00:09:45.741 [job1] 00:09:45.741 filename=/dev/nvme0n2 00:09:45.741 [job2] 00:09:45.741 filename=/dev/nvme0n3 00:09:45.741 [job3] 00:09:45.741 filename=/dev/nvme0n4 00:09:46.031 Could not set queue depth (nvme0n1) 00:09:46.031 Could not set queue depth (nvme0n2) 00:09:46.031 Could not set queue depth (nvme0n3) 00:09:46.031 Could not set queue depth (nvme0n4) 00:09:46.294 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.294 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.294 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.294 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.294 fio-3.35 00:09:46.294 Starting 4 threads 00:09:47.684 00:09:47.684 job0: (groupid=0, jobs=1): err= 0: pid=2249386: Mon Dec 9 17:57:55 2024 00:09:47.684 read: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:09:47.684 slat (nsec): min=8340, max=24747, avg=8937.89, stdev=998.83 00:09:47.684 clat (usec): min=70, max=205, avg=137.61, stdev=19.36 00:09:47.684 lat (usec): min=78, max=214, avg=146.55, stdev=19.28 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 84], 5.00th=[ 104], 10.00th=[ 112], 20.00th=[ 120], 00:09:47.684 | 30.00th=[ 128], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:09:47.684 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:09:47.684 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 204], 00:09:47.684 | 99.99th=[ 206] 00:09:47.684 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:47.684 slat (nsec): min=10077, max=43835, avg=11117.24, stdev=1123.36 00:09:47.684 clat (usec): min=66, max=242, avg=126.30, stdev=22.41 00:09:47.684 lat (usec): min=76, max=253, avg=137.42, stdev=22.33 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 74], 5.00th=[ 90], 10.00th=[ 97], 20.00th=[ 106], 00:09:47.684 | 30.00th=[ 113], 40.00th=[ 119], 50.00th=[ 131], 60.00th=[ 137], 00:09:47.684 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 157], 00:09:47.684 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 180], 99.95th=[ 239], 00:09:47.684 | 99.99th=[ 243] 00:09:47.684 bw ( KiB/s): min=16384, max=16384, per=25.29%, avg=16384.00, stdev= 0.00, samples=1 00:09:47.684 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:47.684 lat (usec) : 100=8.37%, 250=91.63% 00:09:47.684 cpu : usr=4.50%, sys=10.10%, ctx=6939, majf=0, minf=1 00:09:47.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 issued rwts: total=3355,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.684 job1: (groupid=0, jobs=1): err= 0: pid=2249398: Mon Dec 9 17:57:55 2024 00:09:47.684 read: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:09:47.684 slat (nsec): min=8400, max=20417, avg=9006.29, stdev=791.00 00:09:47.684 clat (usec): min=70, max=202, avg=137.56, stdev=19.31 00:09:47.684 lat (usec): min=79, max=211, avg=146.57, stdev=19.33 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 83], 5.00th=[ 104], 10.00th=[ 112], 20.00th=[ 120], 00:09:47.684 | 30.00th=[ 128], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:09:47.684 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 163], 00:09:47.684 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 202], 00:09:47.684 | 99.99th=[ 202] 00:09:47.684 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:47.684 slat (nsec): min=10200, max=42992, avg=11138.16, stdev=1193.37 00:09:47.684 clat (usec): min=65, 
max=242, avg=126.34, stdev=22.45 00:09:47.684 lat (usec): min=75, max=252, avg=137.48, stdev=22.32 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 76], 5.00th=[ 90], 10.00th=[ 97], 20.00th=[ 106], 00:09:47.684 | 30.00th=[ 113], 40.00th=[ 119], 50.00th=[ 131], 60.00th=[ 137], 00:09:47.684 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:09:47.684 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 233], 00:09:47.684 | 99.99th=[ 243] 00:09:47.684 bw ( KiB/s): min=16384, max=16384, per=25.29%, avg=16384.00, stdev= 0.00, samples=1 00:09:47.684 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:47.684 lat (usec) : 100=8.58%, 250=91.42% 00:09:47.684 cpu : usr=5.10%, sys=9.50%, ctx=6937, majf=0, minf=2 00:09:47.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 issued rwts: total=3353,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.684 job2: (groupid=0, jobs=1): err= 0: pid=2249410: Mon Dec 9 17:57:55 2024 00:09:47.684 read: IOPS=3604, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:09:47.684 slat (nsec): min=4175, max=29124, avg=8685.31, stdev=1841.73 00:09:47.684 clat (usec): min=73, max=240, avg=122.25, stdev=31.14 00:09:47.684 lat (usec): min=78, max=249, avg=130.93, stdev=32.09 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 93], 00:09:47.684 | 30.00th=[ 96], 40.00th=[ 100], 50.00th=[ 110], 60.00th=[ 139], 00:09:47.684 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 182], 00:09:47.684 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 237], 00:09:47.684 | 99.99th=[ 241] 00:09:47.684 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:47.684 slat (nsec): min=4627, max=49240, avg=10540.41, stdev=2940.76 00:09:47.684 clat (usec): min=69, max=242, avg=113.92, stdev=29.92 00:09:47.684 lat (usec): min=75, max=247, avg=124.46, stdev=31.47 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:09:47.684 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 98], 60.00th=[ 133], 00:09:47.684 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 159], 00:09:47.684 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 202], 99.95th=[ 204], 00:09:47.684 | 99.99th=[ 243] 00:09:47.684 bw ( KiB/s): min=16384, max=16384, per=25.29%, avg=16384.00, stdev= 0.00, samples=1 00:09:47.684 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:47.684 lat (usec) : 100=47.05%, 250=52.95% 00:09:47.684 cpu : usr=4.70%, sys=9.90%, ctx=7704, majf=0, minf=1 00:09:47.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 issued rwts: total=3608,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.684 job3: (groupid=0, jobs=1): err= 0: pid=2249417: Mon Dec 9 17:57:55 2024 00:09:47.684 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:47.684 slat (nsec): min=8569, max=28124, avg=9230.61, stdev=807.35 00:09:47.684 clat (usec): min=73, max=203, 
avg=94.83, stdev=20.42 00:09:47.684 lat (usec): min=82, max=213, avg=104.06, stdev=20.45 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:09:47.684 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:09:47.684 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 133], 95.00th=[ 149], 00:09:47.684 | 99.00th=[ 178], 99.50th=[ 192], 99.90th=[ 200], 99.95th=[ 204], 00:09:47.684 | 99.99th=[ 204] 00:09:47.684 write: IOPS=4941, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec); 0 zone resets 00:09:47.684 slat (nsec): min=10416, max=40375, avg=11645.77, stdev=1184.05 00:09:47.684 clat (usec): min=70, max=208, avg=88.66, stdev=17.47 00:09:47.684 lat (usec): min=81, max=220, avg=100.30, stdev=17.50 00:09:47.684 clat percentiles (usec): 00:09:47.684 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:09:47.684 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:09:47.684 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 96], 95.00th=[ 141], 00:09:47.684 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 202], 99.95th=[ 202], 00:09:47.684 | 99.99th=[ 208] 00:09:47.684 bw ( KiB/s): min=20480, max=20480, per=31.62%, avg=20480.00, stdev= 0.00, samples=1 00:09:47.684 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:47.684 lat (usec) : 100=89.45%, 250=10.55% 00:09:47.684 cpu : usr=7.40%, sys=13.20%, ctx=9554, majf=0, minf=1 00:09:47.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.684 issued rwts: total=4608,4946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.684 00:09:47.684 Run status group 0 (all jobs): 00:09:47.684 READ: bw=58.2MiB/s (61.1MB/s), 13.1MiB/s-18.0MiB/s (13.7MB/s-18.9MB/s), io=58.3MiB (61.1MB), run=1001-1001msec 00:09:47.684 WRITE: bw=63.3MiB/s (66.3MB/s), 14.0MiB/s-19.3MiB/s (14.7MB/s-20.2MB/s), io=63.3MiB (66.4MB), run=1001-1001msec 00:09:47.684 00:09:47.684 Disk stats (read/write): 00:09:47.684 nvme0n1: ios=2846/3072, merge=0/0, ticks=366/352, in_queue=718, util=84.25% 00:09:47.684 nvme0n2: ios=2796/3072, merge=0/0, ticks=360/354, in_queue=714, util=85.20% 00:09:47.684 nvme0n3: ios=3027/3072, merge=0/0, ticks=360/334, in_queue=694, util=88.45% 00:09:47.684 nvme0n4: ios=4096/4288, merge=0/0, ticks=321/316, in_queue=637, util=89.50% 00:09:47.684 17:57:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:47.684 [global] 00:09:47.684 thread=1 00:09:47.684 invalidate=1 00:09:47.684 rw=randwrite 00:09:47.684 time_based=1 00:09:47.684 runtime=1 00:09:47.684 ioengine=libaio 00:09:47.684 direct=1 00:09:47.684 bs=4096 00:09:47.684 iodepth=1 00:09:47.684 norandommap=0 00:09:47.684 numjobs=1 00:09:47.684 00:09:47.684 verify_dump=1 00:09:47.684 verify_backlog=512 00:09:47.684 verify_state_save=0 00:09:47.684 do_verify=1 00:09:47.684 verify=crc32c-intel 00:09:47.684 [job0] 00:09:47.684 filename=/dev/nvme0n1 00:09:47.684 [job1] 00:09:47.685 filename=/dev/nvme0n2 00:09:47.685 [job2] 00:09:47.685 filename=/dev/nvme0n3 00:09:47.685 [job3] 00:09:47.685 filename=/dev/nvme0n4 00:09:47.685 Could not set queue depth (nvme0n1) 00:09:47.685 Could not set queue depth (nvme0n2) 00:09:47.685 Could not set queue depth 
(nvme0n3) 00:09:47.685 Could not set queue depth (nvme0n4) 00:09:47.944 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.944 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.944 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.944 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.944 fio-3.35 00:09:47.944 Starting 4 threads 00:09:49.326 00:09:49.326 job0: (groupid=0, jobs=1): err= 0: pid=2249822: Mon Dec 9 17:57:56 2024 00:09:49.326 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:49.326 slat (nsec): min=8323, max=30112, avg=8975.76, stdev=840.96 00:09:49.326 clat (usec): min=65, max=196, avg=128.71, stdev=11.63 00:09:49.326 lat (usec): min=75, max=205, avg=137.69, stdev=11.61 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 94], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 122], 00:09:49.326 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:09:49.326 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 145], 00:09:49.326 | 99.00th=[ 172], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 194], 00:09:49.326 | 99.99th=[ 198] 00:09:49.326 write: IOPS=3747, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1001msec); 0 zone resets 00:09:49.326 slat (nsec): min=8835, max=56471, avg=10986.41, stdev=1222.07 00:09:49.326 clat (usec): min=66, max=203, avg=119.46, stdev=11.71 00:09:49.326 lat (usec): min=78, max=214, avg=130.44, stdev=11.69 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 84], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 113], 00:09:49.326 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:09:49.326 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 135], 00:09:49.326 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 182], 99.95th=[ 184], 00:09:49.326 | 99.99th=[ 204] 00:09:49.326 bw ( KiB/s): min=16384, max=16384, per=22.46%, avg=16384.00, stdev= 0.00, samples=1 00:09:49.326 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:49.326 lat (usec) : 100=2.60%, 250=97.40% 00:09:49.326 cpu : usr=6.20%, sys=9.10%, ctx=7335, majf=0, minf=1 00:09:49.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 issued rwts: total=3584,3751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.326 job1: (groupid=0, jobs=1): err= 0: pid=2249832: Mon Dec 9 17:57:56 2024 00:09:49.326 read: IOPS=5264, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec) 00:09:49.326 slat (nsec): min=8404, max=19741, avg=8992.21, stdev=731.92 00:09:49.326 clat (usec): min=66, max=169, avg=81.20, stdev= 5.48 00:09:49.326 lat (usec): min=74, max=178, avg=90.19, stdev= 5.56 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 77], 00:09:49.326 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:09:49.326 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 91], 00:09:49.326 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 106], 99.95th=[ 111], 00:09:49.326 | 99.99th=[ 169] 00:09:49.326 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 
00:09:49.326 slat (nsec): min=10285, max=39067, avg=11101.77, stdev=1034.67 00:09:49.326 clat (usec): min=60, max=105, avg=77.11, stdev= 5.28 00:09:49.326 lat (usec): min=73, max=121, avg=88.21, stdev= 5.38 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:09:49.326 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 78], 00:09:49.326 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 87], 00:09:49.326 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 103], 00:09:49.326 | 99.99th=[ 106] 00:09:49.326 bw ( KiB/s): min=23088, max=23088, per=31.65%, avg=23088.00, stdev= 0.00, samples=1 00:09:49.326 iops : min= 5772, max= 5772, avg=5772.00, stdev= 0.00, samples=1 00:09:49.326 lat (usec) : 100=99.72%, 250=0.28% 00:09:49.326 cpu : usr=7.80%, sys=15.30%, ctx=10902, majf=0, minf=1 00:09:49.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 issued rwts: total=5270,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.326 job2: (groupid=0, jobs=1): err= 0: pid=2249852: Mon Dec 9 17:57:56 2024 00:09:49.326 read: IOPS=4914, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1001msec) 00:09:49.326 slat (nsec): min=8637, max=31777, avg=9240.03, stdev=795.39 00:09:49.326 clat (usec): min=65, max=117, avg=88.85, stdev= 5.85 00:09:49.326 lat (usec): min=82, max=127, avg=98.09, stdev= 5.89 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 84], 00:09:49.326 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90], 00:09:49.326 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 100], 00:09:49.326 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 113], 99.95th=[ 115], 00:09:49.326 | 99.99th=[ 118] 00:09:49.326 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:49.326 slat (nsec): min=10092, max=43401, avg=11284.08, stdev=1040.93 00:09:49.326 clat (usec): min=69, max=115, avg=84.81, stdev= 5.84 00:09:49.326 lat (usec): min=80, max=153, avg=96.09, stdev= 5.97 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:09:49.326 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:09:49.326 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 96], 00:09:49.326 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 111], 99.95th=[ 112], 00:09:49.326 | 99.99th=[ 117] 00:09:49.326 bw ( KiB/s): min=20480, max=20480, per=28.08%, avg=20480.00, stdev= 0.00, samples=1 00:09:49.326 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:49.326 lat (usec) : 100=96.90%, 250=3.10% 00:09:49.326 cpu : usr=8.40%, sys=13.00%, ctx=10039, majf=0, minf=1 00:09:49.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 issued rwts: total=4919,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.326 job3: (groupid=0, jobs=1): err= 0: pid=2249861: Mon Dec 9 17:57:56 2024 00:09:49.326 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:49.326 slat (nsec): 
min=8641, max=23101, avg=9196.26, stdev=872.07 00:09:49.326 clat (usec): min=88, max=180, avg=128.47, stdev=10.30 00:09:49.326 lat (usec): min=97, max=188, avg=137.67, stdev=10.28 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 101], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 122], 00:09:49.326 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:09:49.326 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 145], 00:09:49.326 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 178], 99.95th=[ 180], 00:09:49.326 | 99.99th=[ 180] 00:09:49.326 write: IOPS=3746, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec); 0 zone resets 00:09:49.326 slat (nsec): min=7277, max=39267, avg=11118.40, stdev=1274.82 00:09:49.326 clat (usec): min=74, max=182, avg=119.40, stdev=10.17 00:09:49.326 lat (usec): min=84, max=193, avg=130.52, stdev=10.19 00:09:49.326 clat percentiles (usec): 00:09:49.326 | 1.00th=[ 92], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 113], 00:09:49.326 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:09:49.326 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 135], 00:09:49.326 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 178], 00:09:49.326 | 99.99th=[ 182] 00:09:49.326 bw ( KiB/s): min=16384, max=16384, per=22.46%, avg=16384.00, stdev= 0.00, samples=1 00:09:49.326 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:49.326 lat (usec) : 100=1.80%, 250=98.20% 00:09:49.326 cpu : usr=5.80%, sys=9.70%, ctx=7334, majf=0, minf=1 00:09:49.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.326 issued rwts: total=3584,3750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.326 00:09:49.326 Run status group 0 (all jobs): 00:09:49.326 READ: bw=67.7MiB/s (71.0MB/s), 14.0MiB/s-20.6MiB/s (14.7MB/s-21.6MB/s), io=67.8MiB (71.1MB), run=1001-1001msec 00:09:49.327 WRITE: bw=71.2MiB/s (74.7MB/s), 14.6MiB/s-22.0MiB/s (15.3MB/s-23.0MB/s), io=71.3MiB (74.8MB), run=1001-1001msec 00:09:49.327 00:09:49.327 Disk stats (read/write): 00:09:49.327 nvme0n1: ios=3092/3072, merge=0/0, ticks=382/337, in_queue=719, util=84.17% 00:09:49.327 nvme0n2: ios=4495/4608, merge=0/0, ticks=339/320, in_queue=659, util=85.20% 00:09:49.327 nvme0n3: ios=4096/4282, merge=0/0, ticks=343/319, in_queue=662, util=88.36% 00:09:49.327 nvme0n4: ios=3042/3072, merge=0/0, ticks=348/338, in_queue=686, util=89.40% 00:09:49.327 17:57:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:49.327 [global] 00:09:49.327 thread=1 00:09:49.327 invalidate=1 00:09:49.327 rw=write 00:09:49.327 time_based=1 00:09:49.327 runtime=1 00:09:49.327 ioengine=libaio 00:09:49.327 direct=1 00:09:49.327 bs=4096 00:09:49.327 iodepth=128 00:09:49.327 norandommap=0 00:09:49.327 numjobs=1 00:09:49.327 00:09:49.327 verify_dump=1 00:09:49.327 verify_backlog=512 00:09:49.327 verify_state_save=0 00:09:49.327 do_verify=1 00:09:49.327 verify=crc32c-intel 00:09:49.327 [job0] 00:09:49.327 filename=/dev/nvme0n1 00:09:49.327 [job1] 00:09:49.327 filename=/dev/nvme0n2 00:09:49.327 [job2] 00:09:49.327 filename=/dev/nvme0n3 00:09:49.327 [job3] 00:09:49.327 filename=/dev/nvme0n4 00:09:49.327 Could not set queue depth 
(nvme0n1) 00:09:49.327 Could not set queue depth (nvme0n2) 00:09:49.327 Could not set queue depth (nvme0n3) 00:09:49.327 Could not set queue depth (nvme0n4) 00:09:49.583 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.583 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.583 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.583 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.583 fio-3.35 00:09:49.583 Starting 4 threads 00:09:51.004 00:09:51.004 job0: (groupid=0, jobs=1): err= 0: pid=2250274: Mon Dec 9 17:57:58 2024 00:09:51.004 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:51.004 slat (usec): min=2, max=2650, avg=193.34, stdev=442.68 00:09:51.004 clat (usec): min=17722, max=30225, avg=24948.16, stdev=930.73 00:09:51.004 lat (usec): min=17725, max=30871, avg=25141.50, stdev=854.23 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:09:51.004 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:09:51.004 | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[26084], 00:09:51.004 | 99.00th=[26608], 99.50th=[30016], 99.90th=[30278], 99.95th=[30278], 00:09:51.004 | 99.99th=[30278] 00:09:51.004 write: IOPS=2610, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1005msec); 0 zone resets 00:09:51.004 slat (usec): min=2, max=3239, avg=187.35, stdev=426.91 00:09:51.004 clat (usec): min=4382, max=26837, avg=23965.76, stdev=2197.83 00:09:51.004 lat (usec): min=5222, max=26862, avg=24153.11, stdev=2171.24 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[10683], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:09:51.004 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:09:51.004 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25297], 00:09:51.004 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608], 00:09:51.004 | 99.99th=[26870] 00:09:51.004 bw ( KiB/s): min= 8680, max=11800, per=11.58%, avg=10240.00, stdev=2206.17, samples=2 00:09:51.004 iops : min= 2170, max= 2950, avg=2560.00, stdev=551.54, samples=2 00:09:51.004 lat (msec) : 10=0.33%, 20=1.39%, 50=98.28% 00:09:51.004 cpu : usr=1.29%, sys=3.98%, ctx=1129, majf=0, minf=1 00:09:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.004 issued rwts: total=2560,2624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.004 job1: (groupid=0, jobs=1): err= 0: pid=2250286: Mon Dec 9 17:57:58 2024 00:09:51.004 read: IOPS=11.8k, BW=45.9MiB/s (48.1MB/s)(46.0MiB/1002msec) 00:09:51.004 slat (usec): min=2, max=1323, avg=41.50, stdev=152.01 00:09:51.004 clat (usec): min=4277, max=6749, avg=5534.61, stdev=239.67 00:09:51.004 lat (usec): min=4584, max=6756, avg=5576.11, stdev=203.88 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5407], 00:09:51.004 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5538], 60.00th=[ 5604], 00:09:51.004 | 70.00th=[ 5669], 80.00th=[ 5669], 90.00th=[ 5735], 95.00th=[ 5800], 00:09:51.004 | 
99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6587], 99.95th=[ 6652], 00:09:51.004 | 99.99th=[ 6718] 00:09:51.004 write: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(46.4MiB/1002msec); 0 zone resets 00:09:51.004 slat (usec): min=2, max=1185, avg=39.20, stdev=140.44 00:09:51.004 clat (usec): min=697, max=6419, avg=5202.75, stdev=319.29 00:09:51.004 lat (usec): min=1436, max=6431, avg=5241.95, stdev=296.23 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4948], 20.00th=[ 5080], 00:09:51.004 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:09:51.004 | 70.00th=[ 5342], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5538], 00:09:51.004 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 6128], 99.95th=[ 6259], 00:09:51.004 | 99.99th=[ 6390] 00:09:51.004 bw ( KiB/s): min=48744, max=48744, per=55.14%, avg=48744.00, stdev= 0.00, samples=1 00:09:51.004 iops : min=12186, max=12186, avg=12186.00, stdev= 0.00, samples=1 00:09:51.004 lat (usec) : 750=0.01% 00:09:51.004 lat (msec) : 2=0.07%, 4=0.20%, 10=99.73% 00:09:51.004 cpu : usr=5.69%, sys=10.59%, ctx=1486, majf=0, minf=2 00:09:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.004 issued rwts: total=11776,11868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.004 job2: (groupid=0, jobs=1): err= 0: pid=2250304: Mon Dec 9 17:57:58 2024 00:09:51.004 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:51.004 slat (usec): min=2, max=2644, avg=193.61, stdev=450.46 00:09:51.004 clat (usec): min=17580, max=30756, avg=24954.48, stdev=930.47 00:09:51.004 lat (usec): min=17583, max=30764, avg=25148.10, stdev=861.62 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:09:51.004 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:09:51.004 | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[26084], 00:09:51.004 | 99.00th=[27395], 99.50th=[29230], 99.90th=[30802], 99.95th=[30802], 00:09:51.004 | 99.99th=[30802] 00:09:51.004 write: IOPS=2606, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1005msec); 0 zone resets 00:09:51.004 slat (usec): min=2, max=2567, avg=187.23, stdev=412.59 00:09:51.004 clat (usec): min=4251, max=26124, avg=23993.85, stdev=2134.99 00:09:51.004 lat (usec): min=5106, max=26753, avg=24181.08, stdev=2111.06 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[11338], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:09:51.004 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:09:51.004 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25297], 00:09:51.004 | 99.00th=[25822], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:09:51.004 | 99.99th=[26084] 00:09:51.004 bw ( KiB/s): min= 8712, max=11768, per=11.58%, avg=10240.00, stdev=2160.92, samples=2 00:09:51.004 iops : min= 2178, max= 2942, avg=2560.00, stdev=540.23, samples=2 00:09:51.004 lat (msec) : 10=0.37%, 20=1.24%, 50=98.40% 00:09:51.004 cpu : usr=2.09%, sys=3.29%, ctx=1130, majf=0, minf=1 00:09:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:09:51.004 issued rwts: total=2560,2620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.004 job3: (groupid=0, jobs=1): err= 0: pid=2250312: Mon Dec 9 17:57:58 2024 00:09:51.004 read: IOPS=4970, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1006msec) 00:09:51.004 slat (usec): min=2, max=3081, avg=98.53, stdev=362.96 00:09:51.004 clat (usec): min=2963, max=16424, avg=12738.63, stdev=838.36 00:09:51.004 lat (usec): min=5243, max=18545, avg=12837.15, stdev=888.33 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[ 8848], 5.00th=[12125], 10.00th=[12256], 20.00th=[12518], 00:09:51.004 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:09:51.004 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13960], 00:09:51.004 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16319], 99.95th=[16450], 00:09:51.004 | 99.99th=[16450] 00:09:51.004 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:09:51.004 slat (usec): min=2, max=2506, avg=94.83, stdev=335.08 00:09:51.004 clat (usec): min=7608, max=15359, avg=12422.37, stdev=575.88 00:09:51.004 lat (usec): min=7618, max=15372, avg=12517.19, stdev=644.41 00:09:51.004 clat percentiles (usec): 00:09:51.004 | 1.00th=[11207], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:09:51.004 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:09:51.004 | 70.00th=[12649], 80.00th=[12649], 90.00th=[12911], 95.00th=[13304], 00:09:51.004 | 99.00th=[14222], 99.50th=[14484], 99.90th=[14877], 99.95th=[15008], 00:09:51.004 | 99.99th=[15401] 00:09:51.004 bw ( KiB/s): min=20480, max=20480, per=23.17%, avg=20480.00, stdev= 0.00, samples=2 00:09:51.004 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:51.004 lat (msec) : 4=0.01%, 10=0.87%, 20=99.12% 00:09:51.004 cpu : usr=3.38%, sys=4.78%, ctx=805, majf=0, minf=1 00:09:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.004 issued rwts: total=5000,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.004 00:09:51.004 Run status group 0 (all jobs): 00:09:51.004 READ: bw=85.0MiB/s (89.2MB/s), 9.95MiB/s-45.9MiB/s (10.4MB/s-48.1MB/s), io=85.5MiB (89.7MB), run=1002-1006msec 00:09:51.004 WRITE: bw=86.3MiB/s (90.5MB/s), 10.2MiB/s-46.3MiB/s (10.7MB/s-48.5MB/s), io=86.8MiB (91.1MB), run=1002-1006msec 00:09:51.004 00:09:51.004 Disk stats (read/write): 00:09:51.004 nvme0n1: ios=2098/2253, merge=0/0, ticks=12757/13551, in_queue=26308, util=84.35% 00:09:51.004 nvme0n2: ios=9728/9965, merge=0/0, ticks=17503/16483, in_queue=33986, util=85.29% 00:09:51.004 nvme0n3: ios=2048/2253, merge=0/0, ticks=12773/13538, in_queue=26311, util=88.36% 00:09:51.004 nvme0n4: ios=4096/4298, merge=0/0, ticks=25430/25874, in_queue=51304, util=89.50% 00:09:51.004 17:57:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:51.004 [global] 00:09:51.004 thread=1 00:09:51.004 invalidate=1 00:09:51.004 rw=randwrite 00:09:51.004 time_based=1 00:09:51.004 runtime=1 00:09:51.004 ioengine=libaio 00:09:51.004 direct=1 00:09:51.004 bs=4096 00:09:51.004 iodepth=128 00:09:51.004 norandommap=0 
00:09:51.004 numjobs=1 00:09:51.004 00:09:51.004 verify_dump=1 00:09:51.004 verify_backlog=512 00:09:51.004 verify_state_save=0 00:09:51.004 do_verify=1 00:09:51.004 verify=crc32c-intel 00:09:51.004 [job0] 00:09:51.004 filename=/dev/nvme0n1 00:09:51.005 [job1] 00:09:51.005 filename=/dev/nvme0n2 00:09:51.005 [job2] 00:09:51.005 filename=/dev/nvme0n3 00:09:51.005 [job3] 00:09:51.005 filename=/dev/nvme0n4 00:09:51.005 Could not set queue depth (nvme0n1) 00:09:51.005 Could not set queue depth (nvme0n2) 00:09:51.005 Could not set queue depth (nvme0n3) 00:09:51.005 Could not set queue depth (nvme0n4) 00:09:51.005 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.005 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.005 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.005 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.005 fio-3.35 00:09:51.005 Starting 4 threads 00:09:52.374 00:09:52.374 job0: (groupid=0, jobs=1): err= 0: pid=2250706: Mon Dec 9 17:58:00 2024 00:09:52.374 read: IOPS=4982, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1003msec) 00:09:52.374 slat (usec): min=2, max=2937, avg=98.72, stdev=340.26 00:09:52.374 clat (usec): min=2599, max=15699, avg=12698.68, stdev=936.91 00:09:52.374 lat (usec): min=3127, max=15707, avg=12797.39, stdev=922.13 00:09:52.374 clat percentiles (usec): 00:09:52.374 | 1.00th=[ 8848], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:09:52.374 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:09:52.374 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[13698], 00:09:52.374 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15270], 99.95th=[15270], 00:09:52.374 | 99.99th=[15664] 00:09:52.374 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:52.374 slat (usec): min=2, max=2776, avg=94.54, stdev=329.79 00:09:52.374 clat (usec): min=8970, max=15347, avg=12389.81, stdev=544.38 00:09:52.374 lat (usec): min=8979, max=15372, avg=12484.35, stdev=494.56 00:09:52.374 clat percentiles (usec): 00:09:52.374 | 1.00th=[10552], 5.00th=[11600], 10.00th=[11731], 20.00th=[11994], 00:09:52.374 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:09:52.374 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13173], 00:09:52.374 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14353], 99.95th=[14746], 00:09:52.374 | 99.99th=[15401] 00:09:52.374 bw ( KiB/s): min=20480, max=20480, per=23.86%, avg=20480.00, stdev= 0.00, samples=2 00:09:52.374 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:52.374 lat (msec) : 4=0.08%, 10=0.75%, 20=99.17% 00:09:52.374 cpu : usr=2.79%, sys=5.29%, ctx=1011, majf=0, minf=1 00:09:52.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:52.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.374 issued rwts: total=4997,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.374 job1: (groupid=0, jobs=1): err= 0: pid=2250723: Mon Dec 9 17:58:00 2024 00:09:52.374 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(25.9MiB/1003msec) 00:09:52.374 slat (usec): min=2, max=2318, avg=75.05, 
stdev=265.05 00:09:52.374 clat (usec): min=1826, max=12410, avg=9702.65, stdev=678.67 00:09:52.374 lat (usec): min=3297, max=12417, avg=9777.70, stdev=698.08 00:09:52.374 clat percentiles (usec): 00:09:52.374 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9372], 00:09:52.374 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9765], 00:09:52.374 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:09:52.374 | 99.00th=[11338], 99.50th=[11469], 99.90th=[11731], 99.95th=[11863], 00:09:52.374 | 99.99th=[12387] 00:09:52.374 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:09:52.374 slat (usec): min=2, max=2430, avg=71.49, stdev=250.66 00:09:52.374 clat (usec): min=7173, max=11981, avg=9406.40, stdev=475.17 00:09:52.374 lat (usec): min=7182, max=11992, avg=9477.90, stdev=502.89 00:09:52.374 clat percentiles (usec): 00:09:52.374 | 1.00th=[ 8455], 5.00th=[ 8717], 10.00th=[ 8848], 20.00th=[ 9110], 00:09:52.374 | 30.00th=[ 9241], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:09:52.374 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[ 9896], 95.00th=[10290], 00:09:52.374 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11600], 99.95th=[11731], 00:09:52.374 | 99.99th=[11994] 00:09:52.374 bw ( KiB/s): min=25616, max=27632, per=31.01%, avg=26624.00, stdev=1425.53, samples=2 00:09:52.374 iops : min= 6404, max= 6908, avg=6656.00, stdev=356.38, samples=2 00:09:52.374 lat (msec) : 2=0.01%, 4=0.11%, 10=84.62%, 20=15.26% 00:09:52.374 cpu : usr=3.89%, sys=5.49%, ctx=1170, majf=0, minf=1 00:09:52.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:52.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.374 issued rwts: total=6636,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.374 job2: (groupid=0, jobs=1): err= 0: pid=2250745: Mon Dec 9 17:58:00 2024 00:09:52.374 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:09:52.374 slat (usec): min=2, max=1803, avg=105.11, stdev=277.04 00:09:52.374 clat (usec): min=12006, max=16088, avg=13737.09, stdev=383.05 00:09:52.375 lat (usec): min=12444, max=16096, avg=13842.20, stdev=354.31 00:09:52.375 clat percentiles (usec): 00:09:52.375 | 1.00th=[12780], 5.00th=[12911], 10.00th=[13042], 20.00th=[13566], 00:09:52.375 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13829], 60.00th=[13829], 00:09:52.375 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14091], 95.00th=[14222], 00:09:52.375 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15270], 99.95th=[15401], 00:09:52.375 | 99.99th=[16057] 00:09:52.375 write: IOPS=4891, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1005msec); 0 zone resets 00:09:52.375 slat (usec): min=2, max=1704, avg=100.94, stdev=263.91 00:09:52.375 clat (usec): min=4722, max=18202, avg=13016.59, stdev=1127.99 00:09:52.375 lat (usec): min=5474, max=18786, avg=13117.53, stdev=1125.86 00:09:52.375 clat percentiles (usec): 00:09:52.375 | 1.00th=[ 8094], 5.00th=[12125], 10.00th=[12256], 20.00th=[12649], 00:09:52.375 | 30.00th=[12911], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:09:52.375 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[14484], 00:09:52.375 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:09:52.375 | 99.99th=[18220] 00:09:52.375 bw ( KiB/s): min=17832, max=20480, per=22.31%, avg=19156.00, stdev=1872.42, samples=2 
00:09:52.375 iops : min= 4458, max= 5120, avg=4789.00, stdev=468.10, samples=2 00:09:52.375 lat (msec) : 10=0.78%, 20=99.22% 00:09:52.375 cpu : usr=2.49%, sys=4.98%, ctx=1316, majf=0, minf=1 00:09:52.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:52.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.375 issued rwts: total=4608,4916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.375 job3: (groupid=0, jobs=1): err= 0: pid=2250752: Mon Dec 9 17:58:00 2024 00:09:52.375 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:09:52.375 slat (usec): min=2, max=1760, avg=106.00, stdev=273.23 00:09:52.375 clat (usec): min=12224, max=15899, avg=13782.56, stdev=340.88 00:09:52.375 lat (usec): min=12543, max=15961, avg=13888.56, stdev=347.44 00:09:52.375 clat percentiles (usec): 00:09:52.375 | 1.00th=[12780], 5.00th=[13042], 10.00th=[13304], 20.00th=[13566], 00:09:52.375 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13829], 60.00th=[13960], 00:09:52.375 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14091], 95.00th=[14222], 00:09:52.375 | 99.00th=[14484], 99.50th=[14615], 99.90th=[15008], 99.95th=[15139], 00:09:52.375 | 99.99th=[15926] 00:09:52.375 write: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1005msec); 0 zone resets 00:09:52.375 slat (usec): min=2, max=1582, avg=100.35, stdev=257.87 00:09:52.375 clat (usec): min=4638, max=18687, avg=13076.40, stdev=1080.28 00:09:52.375 lat (usec): min=5468, max=18697, avg=13176.74, stdev=1088.52 00:09:52.375 clat percentiles (usec): 00:09:52.375 | 1.00th=[ 8094], 5.00th=[12125], 10.00th=[12387], 20.00th=[12780], 00:09:52.375 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13042], 60.00th=[13173], 00:09:52.375 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[14615], 00:09:52.375 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:09:52.375 | 99.99th=[18744] 00:09:52.375 bw ( KiB/s): min=17528, max=20480, per=22.14%, avg=19004.00, stdev=2087.38, samples=2 00:09:52.375 iops : min= 4382, max= 5120, avg=4751.00, stdev=521.84, samples=2 00:09:52.375 lat (msec) : 10=0.98%, 20=99.02% 00:09:52.375 cpu : usr=3.09%, sys=5.18%, ctx=1258, majf=0, minf=2 00:09:52.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:52.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.375 issued rwts: total=4608,4878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.375 00:09:52.375 Run status group 0 (all jobs): 00:09:52.375 READ: bw=81.0MiB/s (85.0MB/s), 17.9MiB/s-25.8MiB/s (18.8MB/s-27.1MB/s), io=81.4MiB (85.4MB), run=1003-1005msec 00:09:52.375 WRITE: bw=83.8MiB/s (87.9MB/s), 19.0MiB/s-25.9MiB/s (19.9MB/s-27.2MB/s), io=84.3MiB (88.3MB), run=1003-1005msec 00:09:52.375 00:09:52.375 Disk stats (read/write): 00:09:52.375 nvme0n1: ios=4145/4319, merge=0/0, ticks=12887/13159, in_queue=26046, util=84.15% 00:09:52.375 nvme0n2: ios=5393/5632, merge=0/0, ticks=17098/17084, in_queue=34182, util=85.19% 00:09:52.375 nvme0n3: ios=3830/4096, merge=0/0, ticks=25864/25991, in_queue=51855, util=88.35% 00:09:52.375 nvme0n4: ios=3795/4096, merge=0/0, ticks=25806/26064, in_queue=51870, util=89.39% 00:09:52.375 17:58:00 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:52.375 17:58:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2250828 00:09:52.375 17:58:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:52.375 17:58:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:52.375 [global] 00:09:52.375 thread=1 00:09:52.375 invalidate=1 00:09:52.375 rw=read 00:09:52.375 time_based=1 00:09:52.375 runtime=10 00:09:52.375 ioengine=libaio 00:09:52.375 direct=1 00:09:52.375 bs=4096 00:09:52.375 iodepth=1 00:09:52.375 norandommap=1 00:09:52.375 numjobs=1 00:09:52.375 00:09:52.375 [job0] 00:09:52.375 filename=/dev/nvme0n1 00:09:52.375 [job1] 00:09:52.375 filename=/dev/nvme0n2 00:09:52.375 [job2] 00:09:52.375 filename=/dev/nvme0n3 00:09:52.375 [job3] 00:09:52.375 filename=/dev/nvme0n4 00:09:52.375 Could not set queue depth (nvme0n1) 00:09:52.375 Could not set queue depth (nvme0n2) 00:09:52.375 Could not set queue depth (nvme0n3) 00:09:52.375 Could not set queue depth (nvme0n4) 00:09:52.631 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.631 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.631 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.631 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.631 fio-3.35 00:09:52.631 Starting 4 threads 00:09:55.903 17:58:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:55.904 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=69083136, buflen=4096 00:09:55.904 fio: pid=2251193, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:55.904 17:58:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:55.904 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=79306752, buflen=4096 00:09:55.904 fio: pid=2251186, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:55.904 17:58:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.904 17:58:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:55.904 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43900928, buflen=4096 00:09:55.904 fio: pid=2251151, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:55.904 17:58:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.904 17:58:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:56.161 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43622400, buflen=4096 00:09:56.161 fio: pid=2251167, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:09:56.161 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.161 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:56.161 00:09:56.161 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2251151: Mon Dec 9 17:58:04 2024 00:09:56.161 read: IOPS=8897, BW=34.8MiB/s (36.4MB/s)(106MiB/3046msec) 00:09:56.161 slat (usec): min=6, max=18418, avg=10.63, stdev=146.59 00:09:56.161 clat (usec): min=51, max=537, avg=99.36, stdev=31.62 00:09:56.161 lat (usec): min=60, max=18518, avg=109.99, stdev=150.22 00:09:56.161 clat percentiles (usec): 00:09:56.161 | 1.00th=[ 60], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 78], 00:09:56.161 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 86], 00:09:56.161 | 70.00th=[ 108], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:09:56.161 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 202], 99.95th=[ 210], 00:09:56.161 | 99.99th=[ 247] 00:09:56.161 bw ( KiB/s): min=24888, max=44200, per=32.55%, avg=36070.20, stdev=9997.95, samples=5 00:09:56.161 iops : min= 6222, max=11050, avg=9017.40, stdev=2499.34, samples=5 00:09:56.161 lat (usec) : 100=69.02%, 250=30.97%, 500=0.01%, 750=0.01% 00:09:56.161 cpu : usr=3.94%, sys=12.94%, ctx=27108, majf=0, minf=1 00:09:56.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.161 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.161 issued rwts: total=27103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.161 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2251167: Mon Dec 9 17:58:04 2024 00:09:56.161 read: IOPS=8287, BW=32.4MiB/s (33.9MB/s)(106MiB/3262msec) 00:09:56.161 slat (usec): min=4, max=15269, avg=11.65, stdev=177.86 00:09:56.161 clat (usec): min=45, max=21418, avg=106.78, stdev=134.21 00:09:56.161 lat (usec): min=54, max=21427, avg=118.43, stdev=222.66 00:09:56.161 clat percentiles (usec): 00:09:56.161 | 1.00th=[ 56], 5.00th=[ 60], 10.00th=[ 65], 20.00th=[ 81], 00:09:56.161 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 98], 00:09:56.161 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:09:56.161 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 208], 99.95th=[ 221], 00:09:56.161 | 99.99th=[ 277] 00:09:56.161 bw ( KiB/s): min=25160, max=40896, per=29.07%, avg=32213.33, stdev=7261.17, samples=6 00:09:56.161 iops : min= 6290, max=10224, avg=8053.33, stdev=1815.29, samples=6 00:09:56.161 lat (usec) : 50=0.02%, 100=60.88%, 250=39.08%, 500=0.01% 00:09:56.161 lat (msec) : 2=0.01%, 50=0.01% 00:09:56.161 cpu : usr=3.65%, sys=11.53%, ctx=27043, majf=0, minf=2 00:09:56.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.161 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.161 issued rwts: total=27035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.161 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u 
error, error=Operation not supported): pid=2251186: Mon Dec 9 17:58:04 2024 00:09:56.161 read: IOPS=6834, BW=26.7MiB/s (28.0MB/s)(75.6MiB/2833msec) 00:09:56.161 slat (usec): min=3, max=13622, avg=10.96, stdev=125.43 00:09:56.161 clat (usec): min=72, max=25777, avg=132.82, stdev=185.91 00:09:56.161 lat (usec): min=81, max=25786, avg=143.78, stdev=224.09 00:09:56.161 clat percentiles (usec): 00:09:56.161 | 1.00th=[ 84], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 101], 00:09:56.161 | 30.00th=[ 124], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:09:56.161 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 165], 00:09:56.161 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 219], 99.95th=[ 227], 00:09:56.161 | 99.99th=[ 269] 00:09:56.161 bw ( KiB/s): min=24512, max=32720, per=24.67%, avg=27340.80, stdev=3190.31, samples=5 00:09:56.161 iops : min= 6128, max= 8180, avg=6835.20, stdev=797.58, samples=5 00:09:56.161 lat (usec) : 100=19.05%, 250=80.94%, 500=0.01% 00:09:56.161 lat (msec) : 50=0.01% 00:09:56.161 cpu : usr=3.35%, sys=10.56%, ctx=19366, majf=0, minf=2 00:09:56.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.161 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.161 issued rwts: total=19363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.162 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2251193: Mon Dec 9 17:58:04 2024 00:09:56.162 read: IOPS=6386, BW=24.9MiB/s (26.2MB/s)(65.9MiB/2641msec) 00:09:56.162 slat (nsec): min=8295, max=36469, avg=9083.22, stdev=880.59 00:09:56.162 clat (usec): min=78, max=271, avg=144.73, stdev=14.97 00:09:56.162 lat (usec): min=87, max=279, avg=153.81, stdev=14.93 00:09:56.162 clat percentiles (usec): 00:09:56.162 | 1.00th=[ 96], 5.00th=[ 121], 10.00th=[ 130], 20.00th=[ 137], 00:09:56.162 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:09:56.162 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:09:56.162 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 204], 99.95th=[ 215], 00:09:56.162 | 99.99th=[ 245] 00:09:56.162 bw ( KiB/s): min=24608, max=27024, per=23.36%, avg=25884.80, stdev=1080.45, samples=5 00:09:56.162 iops : min= 6152, max= 6756, avg=6471.20, stdev=270.11, samples=5 00:09:56.162 lat (usec) : 100=1.84%, 250=98.14%, 500=0.01% 00:09:56.162 cpu : usr=2.46%, sys=9.73%, ctx=16867, majf=0, minf=2 00:09:56.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.162 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.162 issued rwts: total=16867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.162 00:09:56.162 Run status group 0 (all jobs): 00:09:56.162 READ: bw=108MiB/s (113MB/s), 24.9MiB/s-34.8MiB/s (26.2MB/s-36.4MB/s), io=353MiB (370MB), run=2641-3262msec 00:09:56.162 00:09:56.162 Disk stats (read/write): 00:09:56.162 nvme0n1: ios=25188/0, merge=0/0, ticks=2308/0, in_queue=2308, util=93.86% 00:09:56.162 nvme0n2: ios=24838/0, merge=0/0, ticks=2529/0, in_queue=2529, util=93.62% 00:09:56.162 nvme0n3: ios=17882/0, merge=0/0, ticks=2201/0, in_queue=2201, util=96.03% 00:09:56.162 nvme0n4: ios=16681/0, merge=0/0, ticks=2245/0, 
in_queue=2245, util=96.42% 00:09:56.419 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.419 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:56.677 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.677 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:56.934 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.934 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:56.934 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.934 17:58:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:57.193 17:58:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:57.193 17:58:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2250828 00:09:57.193 17:58:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:57.193 17:58:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:58.124 nvmf hotplug test: fio failed as expected 00:09:58.124 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 
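
The exchange above is the hotplug phase of the fio target test: a 10-second, queue-depth-1 read job is started against the four namespaces, and while it runs the backing bdevs are deleted on the target over rpc.py, so the io_u "Operation not supported" errors and fio's non-zero exit status are the expected outcome. A minimal bash sketch of that pattern, reusing the workspace paths and bdev names printed in the log (an illustration of the flow, not the fio.sh source itself):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  wrap=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper

  # Start the long-running read job in the background (same flags as
  # target/fio.sh@58 above) and give it a head start.
  $wrap -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Pull the block devices out from under fio; each delete surfaces as
  # an io_u error on the corresponding /dev/nvme0nX initiator device.
  $rpc bdev_raid_delete concat0
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$m"
  done

  # fio is expected to die once its files vanish; a zero exit here
  # would mean the initiator never observed the hot-remove.
  if wait "$fio_pid"; then
      echo "unexpected: fio survived bdev hot-remove"
  else
      echo "nvmf hotplug test: fio failed as expected"
  fi

After nvme disconnect, the lsblk-based waitforserial_disconnect checks above (lsblk -o NAME,SERIAL piped through grep -q -w SPDKISFASTANDAWESOME) confirm the namespace is really gone before nvmf_delete_subsystem tears down the subsystem.
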
00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:58.382 rmmod nvme_rdma 00:09:58.382 rmmod nvme_fabrics 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2247838 ']' 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2247838 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2247838 ']' 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2247838 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.382 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247838 00:09:58.640 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.640 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.640 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247838' 00:09:58.640 killing process with pid 2247838 00:09:58.640 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2247838 00:09:58.640 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2247838 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:58.899 00:09:58.899 real 0m27.798s 00:09:58.899 user 2m10.634s 00:09:58.899 sys 0m10.880s 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.899 
************************************ 00:09:58.899 END TEST nvmf_fio_target 00:09:58.899 ************************************ 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.899 ************************************ 00:09:58.899 START TEST nvmf_bdevio 00:09:58.899 ************************************ 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:58.899 * Looking for test storage... 00:09:58.899 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.899 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.159 --rc genhtml_branch_coverage=1 00:09:59.159 --rc genhtml_function_coverage=1 00:09:59.159 --rc genhtml_legend=1 00:09:59.159 --rc geninfo_all_blocks=1 00:09:59.159 --rc geninfo_unexecuted_blocks=1 00:09:59.159 00:09:59.159 ' 00:09:59.159 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.160 --rc genhtml_branch_coverage=1 00:09:59.160 --rc genhtml_function_coverage=1 00:09:59.160 --rc genhtml_legend=1 00:09:59.160 --rc geninfo_all_blocks=1 00:09:59.160 --rc geninfo_unexecuted_blocks=1 00:09:59.160 00:09:59.160 ' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.160 --rc genhtml_branch_coverage=1 00:09:59.160 --rc genhtml_function_coverage=1 00:09:59.160 --rc genhtml_legend=1 00:09:59.160 --rc geninfo_all_blocks=1 00:09:59.160 --rc geninfo_unexecuted_blocks=1 00:09:59.160 00:09:59.160 ' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.160 --rc genhtml_branch_coverage=1 00:09:59.160 --rc genhtml_function_coverage=1 00:09:59.160 --rc genhtml_legend=1 00:09:59.160 --rc geninfo_all_blocks=1 00:09:59.160 --rc geninfo_unexecuted_blocks=1 00:09:59.160 00:09:59.160 ' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:59.160 17:58:06 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.160 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.160 17:58:06 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:07.287 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:07.287 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:07.287 17:58:13 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:07.287 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:07.287 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:07.287 17:58:13 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:07.287 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
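The trace above resolves each interface's address with the same three-command pipeline every time it needs one (nvmf/common.sh@116-117). A minimal standalone sketch of that helper, assuming only that the interface name is passed as the first argument:

    # Sketch of the get_ip_address helper traced above (nvmf/common.sh@116-117).
    # Prints the interface's first IPv4 address with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig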
00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:07.288 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:07.288 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:07.288 altname enp217s0f0np0 00:10:07.288 altname ens818f0np0 00:10:07.288 inet 192.168.100.8/24 scope global mlx_0_0 00:10:07.288 valid_lft forever preferred_lft forever 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:07.288 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:07.288 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:07.288 altname enp217s0f1np1 00:10:07.288 altname ens818f1np1 00:10:07.288 inet 192.168.100.9/24 scope global mlx_0_1 00:10:07.288 valid_lft forever preferred_lft forever 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
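rdma_device_init, entered at nvmf/common.sh@448 above, is essentially a fixed modprobe sequence followed by IP assignment. A hedged sketch of the module-load half (load_ib_rdma_modules), reconstructed from the modprobe calls in the trace above (nvmf/common.sh@62-72):

    # Reconstructed from the modprobe calls traced above; order follows the log.
    load_ib_rdma_modules() {
        [ "$(uname)" != Linux ] && return 0   # the IB/RDMA stack below is Linux-only
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }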
00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:07.288 192.168.100.9' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:07.288 192.168.100.9' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:07.288 192.168.100.9' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2255511 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2255511 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2255511 ']' 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.288 17:58:14 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.288 [2024-12-09 17:58:14.305500] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:10:07.288 [2024-12-09 17:58:14.305552] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.288 [2024-12-09 17:58:14.395395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.288 [2024-12-09 17:58:14.435329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.288 [2024-12-09 17:58:14.435369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.288 [2024-12-09 17:58:14.435379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.288 [2024-12-09 17:58:14.435387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.288 [2024-12-09 17:58:14.435394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
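Earlier in this block (nvmf/common.sh@484-486) the trace folds one address per RDMA netdev into the newline-separated RDMA_IP_LIST and slices it with head/tail. The selection logic, reduced to a sketch with the addresses seen on this rig:

    # RDMA_IP_LIST carries one IPv4 address per RDMA netdev, newline-separated.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9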
00:10:07.288 [2024-12-09 17:58:14.437266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:07.289 [2024-12-09 17:58:14.437379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:07.289 [2024-12-09 17:58:14.437488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.289 [2024-12-09 17:58:14.437489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.289 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.289 [2024-12-09 17:58:15.218750] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa1d280/0xa21770) succeed. 00:10:07.289 [2024-12-09 17:58:15.228102] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa1e910/0xa62e10) succeed. 
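rpc_cmd in the line above forwards its arguments to scripts/rpc.py on the target's RPC socket (/var/tmp/spdk.sock, as echoed at startup). Run outside the harness, the transport-creation step would look roughly like:

    # Equivalent of the rpc_cmd traced above, issued directly at the target.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192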
00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.547 Malloc0 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.547 [2024-12-09 17:58:15.411962] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:07.547 { 00:10:07.547 "params": { 00:10:07.547 "name": "Nvme$subsystem", 00:10:07.547 "trtype": "$TEST_TRANSPORT", 00:10:07.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.547 "adrfam": "ipv4", 00:10:07.547 "trsvcid": "$NVMF_PORT", 00:10:07.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.547 "hdgst": ${hdgst:-false}, 00:10:07.547 "ddgst": ${ddgst:-false} 00:10:07.547 }, 00:10:07.547 "method": "bdev_nvme_attach_controller" 00:10:07.547 } 00:10:07.547 EOF 00:10:07.547 )") 00:10:07.547 17:58:15 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:07.547 17:58:15 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:07.547 "params": { 00:10:07.547 "name": "Nvme1", 00:10:07.547 "trtype": "rdma", 00:10:07.547 "traddr": "192.168.100.8", 00:10:07.547 "adrfam": "ipv4", 00:10:07.547 "trsvcid": "4420", 00:10:07.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.547 "hdgst": false, 00:10:07.547 "ddgst": false 00:10:07.547 }, 00:10:07.547 "method": "bdev_nvme_attach_controller" 00:10:07.547 }' 00:10:07.547 [2024-12-09 17:58:15.465214] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:10:07.547 [2024-12-09 17:58:15.465262] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255793 ] 00:10:07.804 [2024-12-09 17:58:15.556782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.804 [2024-12-09 17:58:15.599385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.804 [2024-12-09 17:58:15.599498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.804 [2024-12-09 17:58:15.599499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.804 I/O targets: 00:10:07.804 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:07.804 00:10:07.804 00:10:07.804 CUnit - A unit testing framework for C - Version 2.1-3 00:10:07.804 http://cunit.sourceforge.net/ 00:10:07.804 00:10:07.804 00:10:07.804 Suite: bdevio tests on: Nvme1n1 00:10:08.062 Test: blockdev write read block ...passed 00:10:08.062 Test: blockdev write zeroes read block ...passed 00:10:08.062 Test: blockdev write zeroes read no split ...passed 00:10:08.062 Test: blockdev write zeroes read split ...passed 00:10:08.062 Test: blockdev write zeroes read split partial ...passed 00:10:08.062 Test: blockdev reset ...[2024-12-09 17:58:15.807259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:08.062 [2024-12-09 17:58:15.829681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:10:08.062 [2024-12-09 17:58:15.856730] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
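The target side of this test is built by the four rpc_cmd calls traced in bdevio.sh@19-22 above. Replayed by hand against the same socket, the sequence would be approximately:

    RPC='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

bdevio itself then attaches as the initiator using the JSON printed above, which is simply a bdev_nvme_attach_controller config pointed at that listener.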
00:10:08.062 passed 00:10:08.062 Test: blockdev write read 8 blocks ...passed 00:10:08.062 Test: blockdev write read size > 128k ...passed 00:10:08.062 Test: blockdev write read invalid size ...passed 00:10:08.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:08.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:08.062 Test: blockdev write read max offset ...passed 00:10:08.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:08.062 Test: blockdev writev readv 8 blocks ...passed 00:10:08.062 Test: blockdev writev readv 30 x 1block ...passed 00:10:08.062 Test: blockdev writev readv block ...passed 00:10:08.062 Test: blockdev writev readv size > 128k ...passed 00:10:08.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:08.062 Test: blockdev comparev and writev ...[2024-12-09 17:58:15.860145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.860799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:08.062 [2024-12-09 17:58:15.860807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:08.062 passed 00:10:08.062 Test: blockdev nvme passthru rw ...passed 00:10:08.062 Test: blockdev nvme passthru vendor specific ...[2024-12-09 17:58:15.861137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:08.062 [2024-12-09 17:58:15.861150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.861197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:08.062 [2024-12-09 17:58:15.861207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.861257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:08.062 [2024-12-09 17:58:15.861268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:08.062 [2024-12-09 17:58:15.861309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:10:08.062 [2024-12-09 17:58:15.861321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:08.062 passed 00:10:08.062 Test: blockdev nvme admin passthru ...passed 00:10:08.062 Test: blockdev copy ...passed 00:10:08.062 00:10:08.062 Run Summary: Type Total Ran Passed Failed Inactive 00:10:08.062 suites 1 1 n/a 0 0 00:10:08.062 tests 23 23 23 0 0 00:10:08.062 asserts 152 152 152 0 n/a 00:10:08.062 00:10:08.062 Elapsed time = 0.171 seconds 00:10:08.062 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:08.062 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.063 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.063 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.063 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:08.063 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:08.063 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.063 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:08.321 rmmod nvme_rdma 00:10:08.321 rmmod nvme_fabrics 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.321 17:58:16 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2255511 ']' 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2255511 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2255511 ']' 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2255511 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2255511 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2255511' 00:10:08.321 killing process with pid 2255511 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2255511 00:10:08.321 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2255511 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:08.581 00:10:08.581 real 0m9.714s 00:10:08.581 user 0m11.132s 00:10:08.581 sys 0m6.238s 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.581 ************************************ 00:10:08.581 END TEST nvmf_bdevio 00:10:08.581 ************************************ 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:08.581 00:10:08.581 real 4m23.787s 00:10:08.581 user 11m14.171s 00:10:08.581 sys 1m41.332s 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.581 ************************************ 00:10:08.581 END TEST nvmf_target_core 00:10:08.581 ************************************ 00:10:08.581 17:58:16 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:08.581 17:58:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.581 17:58:16 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.581 17:58:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:08.581 ************************************ 00:10:08.581 START TEST nvmf_target_extra 00:10:08.581 ************************************ 00:10:08.581 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:10:08.840 * Looking for test storage... 00:10:08.840 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.840 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.841 --rc genhtml_branch_coverage=1 00:10:08.841 --rc genhtml_function_coverage=1 00:10:08.841 --rc genhtml_legend=1 00:10:08.841 --rc geninfo_all_blocks=1 00:10:08.841 --rc geninfo_unexecuted_blocks=1 00:10:08.841 00:10:08.841 ' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.841 --rc genhtml_branch_coverage=1 00:10:08.841 --rc genhtml_function_coverage=1 00:10:08.841 --rc genhtml_legend=1 00:10:08.841 --rc geninfo_all_blocks=1 00:10:08.841 --rc geninfo_unexecuted_blocks=1 00:10:08.841 00:10:08.841 ' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.841 --rc genhtml_branch_coverage=1 00:10:08.841 --rc genhtml_function_coverage=1 00:10:08.841 --rc genhtml_legend=1 00:10:08.841 --rc geninfo_all_blocks=1 00:10:08.841 --rc geninfo_unexecuted_blocks=1 00:10:08.841 00:10:08.841 ' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:08.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.841 --rc genhtml_branch_coverage=1 00:10:08.841 --rc genhtml_function_coverage=1 00:10:08.841 --rc genhtml_legend=1 00:10:08.841 --rc geninfo_all_blocks=1 00:10:08.841 --rc geninfo_unexecuted_blocks=1 00:10:08.841 00:10:08.841 ' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.841 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.841 17:58:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:09.102 ************************************ 00:10:09.102 START TEST nvmf_example 00:10:09.102 ************************************ 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:10:09.102 * Looking for test storage... 
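The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where '[' '' -eq 1 ']' is evaluated with an empty variable; the harness tolerates the failure, but the usual guard is to default the value before the numeric test. A sketch (SPDK_TEST_FOO is a hypothetical name standing in for whichever flag line 33 actually reads):

    # Hypothetical variable: an empty value makes '[' '' -eq 1 ']' error out,
    # so default unset/empty to 0 to keep the operand numeric.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        :   # optional behavior guarded by the flag
    fi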
00:10:09.102 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.102 17:58:16 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:09.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.102 --rc genhtml_branch_coverage=1 00:10:09.102 --rc genhtml_function_coverage=1 00:10:09.102 --rc genhtml_legend=1 00:10:09.102 --rc geninfo_all_blocks=1 00:10:09.102 --rc geninfo_unexecuted_blocks=1 00:10:09.102 00:10:09.102 ' 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:09.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.102 --rc genhtml_branch_coverage=1 00:10:09.102 --rc genhtml_function_coverage=1 00:10:09.102 --rc genhtml_legend=1 00:10:09.102 --rc geninfo_all_blocks=1 00:10:09.102 --rc geninfo_unexecuted_blocks=1 00:10:09.102 00:10:09.102 ' 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:09.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.102 --rc genhtml_branch_coverage=1 00:10:09.102 --rc genhtml_function_coverage=1 00:10:09.102 --rc genhtml_legend=1 00:10:09.102 --rc geninfo_all_blocks=1 00:10:09.102 --rc geninfo_unexecuted_blocks=1 00:10:09.102 00:10:09.102 ' 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:09.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.102 --rc genhtml_branch_coverage=1 00:10:09.102 --rc genhtml_function_coverage=1 00:10:09.102 --rc genhtml_legend=1 00:10:09.102 --rc geninfo_all_blocks=1 00:10:09.102 --rc geninfo_unexecuted_blocks=1 00:10:09.102 00:10:09.102 ' 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
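The lcov probe above runs through cmp_versions in scripts/common.sh, which splits both version strings on IFS=.-: and compares the fields numerically. A simplified standalone sketch of the lt check it implements (the real helper supports several comparison operators and is more defensive):

    # Simplified sketch of the lt/cmp_versions pattern traced above.
    lt() {   # lt 1.15 2  ->  success (0) if $1 is an older version than $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo 'lcov 1.15 predates 2'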
00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.102 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.103 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.103 17:58:17 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.252 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
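gather_supported_nvmf_pci_devs, which begins above and continues below, sorts the host's NICs into the e810/x722/mlx arrays by PCI vendor:device ID (0x8086 for Intel, 0x15b3 for Mellanox) out of a prebuilt pci_bus_cache. A rough standalone equivalent that reads sysfs directly (the cache plumbing is simplified away; the array name mirrors the script):

    mellanox=0x15b3
    declare -a mlx=()
    for dev in /sys/bus/pci/devices/*; do
        # each device directory exposes hex IDs, e.g. vendor=0x15b3 device=0x1015
        if [ "$(cat "$dev/vendor")" = "$mellanox" ]; then
            mlx+=("${dev##*/}")
        fi
    done
    echo "Mellanox PCI functions: ${mlx[*]}"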
00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:17.253 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
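The backslash-riddled comparisons in the trace, such as [[ 0x1015 == \0\x\1\0\1\7 ]], are an xtrace artifact: bash escapes a quoted right-hand side of [[ == ]] to show it is matched literally rather than as a glob pattern. Device ID 0x1015 identifies a ConnectX-4 Lx part, so the 0x1017 and 0x1019 (ConnectX-5 family) branches fail and the generic mlx5 handling applies, as the next lines show. The quoting behavior in two lines:

    set -x
    [[ "0x1015" == "0x1017" ]] || echo "no match"    # traces as: [[ 0x1015 == \0\x\1\0\1\7 ]]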
00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:17.253 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:17.253 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:17.253 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.253 17:58:24 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:17.253 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:17.254 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:17.254 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:17.254 altname enp217s0f0np0 00:10:17.254 altname ens818f0np0 00:10:17.254 inet 192.168.100.8/24 scope global mlx_0_0 00:10:17.254 valid_lft forever preferred_lft forever 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:17.254 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:17.254 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:17.254 altname enp217s0f1np1 00:10:17.254 altname ens818f1np1 00:10:17.254 inet 192.168.100.9/24 scope global mlx_0_1 00:10:17.254 valid_lft forever preferred_lft forever 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:17.254 17:58:24 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:17.254 192.168.100.9' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:17.254 192.168.100.9' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:17.254 192.168.100.9' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2259547 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2259547 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2259547 ']' 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
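At this point both RDMA ports have routable addresses, NVMF_FIRST_TARGET_IP/NVMF_SECOND_TARGET_IP are 192.168.100.8/.9, and nvmfexamplestart has launched the example target (core mask 0xF, i.e. four cores) while waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A reduced sketch of that start-and-wait sequence (loop bound and polling interval are illustrative; the real waitforlisten in autotest_common.sh is more thorough):

    ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds once the target is listening on its RPC socket
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done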
00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.254 17:58:24 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.511 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.511 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:17.511 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.512 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.512 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.512 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:17.512 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.512 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.769 17:58:25 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:29.967 Initializing NVMe Controllers
00:10:29.967 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:10:29.967 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:29.967 Initialization complete. Launching workers.
00:10:29.967 ========================================================
00:10:29.967 Latency(us)
00:10:29.967 Device Information : IOPS MiB/s Average min max
00:10:29.967 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26498.50 103.51 2414.66 633.25 12100.62
00:10:29.967 ========================================================
00:10:29.967 Total : 26498.50 103.51 2414.66 633.25 12100.62
00:10:29.967
00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:29.967 rmmod nvme_rdma 00:10:29.967 rmmod nvme_fabrics 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2259547 ']' 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2259547 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2259547 ']' 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2259547 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2259547 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:29.967 17:58:36
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2259547' 00:10:29.967 killing process with pid 2259547 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2259547 00:10:29.967 17:58:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2259547 00:10:29.967 nvmf threads initialize successfully 00:10:29.967 bdev subsystem init successfully 00:10:29.967 created a nvmf target service 00:10:29.967 create targets's poll groups done 00:10:29.967 all subsystems of target started 00:10:29.967 nvmf target is running 00:10:29.967 all subsystems of target stopped 00:10:29.967 destroy targets's poll groups done 00:10:29.967 destroyed the nvmf target service 00:10:29.967 bdev subsystem finish successfully 00:10:29.967 nvmf threads destroy successfully 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 00:10:29.967 real 0m20.387s 00:10:29.967 user 0m52.455s 00:10:29.967 sys 0m6.149s 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 ************************************ 00:10:29.967 END TEST nvmf_example 00:10:29.967 ************************************ 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 ************************************ 00:10:29.967 START TEST nvmf_filesystem 00:10:29.967 ************************************ 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:10:29.967 * Looking for test storage... 
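The START TEST/END TEST banners and the real/user/sys summary above come from run_test, the harness wrapper that times each test script and marks its boundaries in the log. A reduced sketch of the pattern (banner width and bookkeeping trimmed; the real helper in autotest_common.sh also validates its arguments and records the test name):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        time "$@"    # e.g. test/nvmf/target/filesystem.sh --transport=rdma
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }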
00:10:29.967 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.967 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.968 --rc genhtml_branch_coverage=1 00:10:29.968 --rc genhtml_function_coverage=1 00:10:29.968 --rc genhtml_legend=1 00:10:29.968 --rc geninfo_all_blocks=1 00:10:29.968 --rc geninfo_unexecuted_blocks=1 00:10:29.968 00:10:29.968 ' 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.968 --rc genhtml_branch_coverage=1 00:10:29.968 --rc genhtml_function_coverage=1 00:10:29.968 --rc genhtml_legend=1 00:10:29.968 --rc geninfo_all_blocks=1 00:10:29.968 --rc geninfo_unexecuted_blocks=1 00:10:29.968 00:10:29.968 ' 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.968 --rc genhtml_branch_coverage=1 00:10:29.968 --rc genhtml_function_coverage=1 00:10:29.968 --rc genhtml_legend=1 00:10:29.968 --rc geninfo_all_blocks=1 00:10:29.968 --rc geninfo_unexecuted_blocks=1 00:10:29.968 00:10:29.968 ' 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.968 --rc genhtml_branch_coverage=1 00:10:29.968 --rc genhtml_function_coverage=1 00:10:29.968 --rc genhtml_legend=1 00:10:29.968 --rc geninfo_all_blocks=1 00:10:29.968 --rc geninfo_unexecuted_blocks=1 00:10:29.968 00:10:29.968 ' 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:10:29.968 17:58:37 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
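Each CONFIG_* value sourced from build_config.sh here and in the entries that follow has a one-to-one preprocessor counterpart in include/spdk/config.h, which applications.sh greps a little further down to confirm this is a debug build: a y becomes a #define and an n an #undef. Two pairs visible in this very log:

    CONFIG_UBSAN=y  <->  #define SPDK_CONFIG_UBSAN 1
    CONFIG_ASAN=n   <->  #undef SPDK_CONFIG_ASAN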
00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:29.968 17:58:37 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:29.968 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:29.969 17:58:37 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:29.969 #define SPDK_CONFIG_H 00:10:29.969 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:29.969 #define SPDK_CONFIG_APPS 1 00:10:29.969 #define SPDK_CONFIG_ARCH native 00:10:29.969 #undef SPDK_CONFIG_ASAN 00:10:29.969 #undef SPDK_CONFIG_AVAHI 00:10:29.969 #undef SPDK_CONFIG_CET 00:10:29.969 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:29.969 #define SPDK_CONFIG_COVERAGE 1 00:10:29.969 #define SPDK_CONFIG_CROSS_PREFIX 00:10:29.969 #undef SPDK_CONFIG_CRYPTO 00:10:29.969 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:29.969 #undef SPDK_CONFIG_CUSTOMOCF 00:10:29.969 #undef SPDK_CONFIG_DAOS 00:10:29.969 #define SPDK_CONFIG_DAOS_DIR 00:10:29.969 #define SPDK_CONFIG_DEBUG 1 00:10:29.969 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:29.969 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:10:29.969 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:29.969 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:29.969 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:29.969 #undef SPDK_CONFIG_DPDK_UADK 00:10:29.969 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:10:29.969 #define SPDK_CONFIG_EXAMPLES 1 00:10:29.969 #undef SPDK_CONFIG_FC 00:10:29.969 #define SPDK_CONFIG_FC_PATH 00:10:29.969 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:29.969 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:29.969 #define SPDK_CONFIG_FSDEV 1 00:10:29.969 #undef SPDK_CONFIG_FUSE 00:10:29.969 #undef SPDK_CONFIG_FUZZER 00:10:29.969 #define SPDK_CONFIG_FUZZER_LIB 00:10:29.969 #undef SPDK_CONFIG_GOLANG 00:10:29.969 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:29.969 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:29.969 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:29.969 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:29.969 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:29.969 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:29.969 #undef SPDK_CONFIG_HAVE_LZ4 00:10:29.969 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:29.969 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:29.969 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:29.969 #define SPDK_CONFIG_IDXD 1 00:10:29.969 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:29.969 #undef SPDK_CONFIG_IPSEC_MB 00:10:29.969 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:29.969 #define SPDK_CONFIG_ISAL 1 00:10:29.969 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:29.969 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:29.969 #define SPDK_CONFIG_LIBDIR 00:10:29.969 #undef SPDK_CONFIG_LTO 00:10:29.969 #define SPDK_CONFIG_MAX_LCORES 128 00:10:29.969 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:29.969 #define SPDK_CONFIG_NVME_CUSE 1 00:10:29.969 #undef SPDK_CONFIG_OCF 00:10:29.969 #define SPDK_CONFIG_OCF_PATH 00:10:29.969 #define SPDK_CONFIG_OPENSSL_PATH 00:10:29.969 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:29.969 #define SPDK_CONFIG_PGO_DIR 00:10:29.969 #undef SPDK_CONFIG_PGO_USE 00:10:29.969 #define SPDK_CONFIG_PREFIX /usr/local 00:10:29.969 #undef SPDK_CONFIG_RAID5F 00:10:29.969 #undef SPDK_CONFIG_RBD 00:10:29.969 #define SPDK_CONFIG_RDMA 1 00:10:29.969 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:29.969 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:29.969 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:29.969 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:29.969 #define SPDK_CONFIG_SHARED 1 00:10:29.969 #undef SPDK_CONFIG_SMA 00:10:29.969 
#define SPDK_CONFIG_TESTS 1 00:10:29.969 #undef SPDK_CONFIG_TSAN 00:10:29.969 #define SPDK_CONFIG_UBLK 1 00:10:29.969 #define SPDK_CONFIG_UBSAN 1 00:10:29.969 #undef SPDK_CONFIG_UNIT_TESTS 00:10:29.969 #undef SPDK_CONFIG_URING 00:10:29.969 #define SPDK_CONFIG_URING_PATH 00:10:29.969 #undef SPDK_CONFIG_URING_ZNS 00:10:29.969 #undef SPDK_CONFIG_USDT 00:10:29.969 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:29.969 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:29.969 #undef SPDK_CONFIG_VFIO_USER 00:10:29.969 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:29.969 #define SPDK_CONFIG_VHOST 1 00:10:29.969 #define SPDK_CONFIG_VIRTIO 1 00:10:29.969 #undef SPDK_CONFIG_VTUNE 00:10:29.969 #define SPDK_CONFIG_VTUNE_DIR 00:10:29.969 #define SPDK_CONFIG_WERROR 1 00:10:29.969 #define SPDK_CONFIG_WPDK_DIR 00:10:29.969 #undef SPDK_CONFIG_XNVME 00:10:29.969 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
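The applications.sh probe above decides whether debug-only app options apply: it first tests that the generated include/spdk/config.h exists (@22), then substring-matches its entire contents against `#define SPDK_CONFIG_DEBUG` (@23); xtrace prints the glob pattern with every character escaped, hence the `*\#\d\e\f\i\n\e...*` rendering. A minimal sketch of that check, with the workspace path shortened:

  # Probe the generated config header for a build flag (sketch, not the full script).
  CONFIG_H=include/spdk/config.h
  if [[ -e $CONFIG_H && $(<"$CONFIG_H") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    : # debug build detected; the SPDK_AUTOTEST_DEBUG_APPS handling follows in the trace
  fi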
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:10:29.969 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:29.970 17:58:37 
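Each nested source of paths/export.sh re-prepends the same /opt toolchain directories, which is why the PATH above carries the go/protoc/golangci triple many times over. That is harmless here, but if the duplication mattered, an idempotent prepend is a one-liner; a sketch (not what export.sh itself does):

  # Prepend only when the directory is not already present on PATH.
  path_prepend() {
    case ":$PATH:" in
      *":$1:"*) ;;              # already there, keep PATH unchanged
      *) PATH="$1:$PATH" ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  export PATH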
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
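pm/common above assembles the resource-monitor set conditionally: an associative array records which collectors need sudo, cpu-load and vmstat are always monitored, and cpu-temp plus bmc-pm are appended only on bare-metal Linux (a platform string that must differ from QEMU, rendered as dots in this trace, plus the absence of /.dockerenv). A condensed sketch of that selection:

  declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0
  )
  SUDO=("" "sudo -E")   # index with MONITOR_RESOURCES_SUDO[name] to get the launch prefix
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then   # the real script also excludes QEMU
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi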
export SPDK_TEST_ISCSI 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:29.970 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
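The long `: 0` / `export SPDK_TEST_*` run from autotest_common.sh@58 onward is xtrace's view of the default-then-export idiom: each flag is assigned a default only when the caller has not already set it (the `:` no-op exists just to force the expansion), then exported for child scripts. Assuming that idiom, the underlying source looks like:

  # ":" is a no-op; ${VAR:=default} assigns only when VAR is unset or empty.
  : "${SPDK_TEST_NVMF:=0}"; export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVME_CLI:=0}"; export SPDK_TEST_NVME_CLI
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT

which explains why flags set to 1 by this job trace as `: 1` here while the rest fall back to 0.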
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:29.971 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
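Sanitizer wiring (autotest_common.sh@199-244 above): ASAN and UBSAN get hard-fail options, and a LeakSanitizer suppression file is rebuilt on each run so the known libfuse3 leak is ignored. The option values below are copied from the trace; the file-writing steps are condensed:

  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"                       # start from a clean suppression list
  echo "leak:libfuse3.so" >> "$supp"
  export LSAN_OPTIONS=suppressions=$supp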
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2261760 ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2261760 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ixeKm5 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ixeKm5/tests/target /tmp/spdk.ixeKm5 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=60325076992 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67015421952 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6690344960 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33494249472 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507708928 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=13459456 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13380014080 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=13403086848 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23072768 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33507299328 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33507713024 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=413696 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6701527040 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6701539328 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:29.972 * Looking for test storage... 
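set_test_storage (entered at @1696 above) sizes a scratch area: it builds candidate directories (the testdir itself, a mktemp -udt fallback under /tmp, then the fallback root), parses `df -T` into parallel associative arrays keyed by mount point, and accepts the first candidate whose filesystem offers the requested headroom (2 GiB plus slack here). A trimmed sketch of the df harvest, with field names taken from the trace:

  probe_storage() {
    local -A mounts fss sizes avails uses
    local source fs size use avail _ mount
    while read -r source fs size use avail _ mount; do
      mounts[$mount]=$source
      fss[$mount]=$fs
      sizes[$mount]=$size
      uses[$mount]=$use
      avails[$mount]=$avail
    done < <(df -T | grep -v Filesystem)
    # On this runner: avails[/]=60325076992 vs requested_size=2214592512, so / is accepted.
    echo "free on /: ${avails[/]:-unknown}"
  }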
00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=60325076992 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8904937472 00:10:29.972 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.973 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:29.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.973 --rc genhtml_branch_coverage=1 00:10:29.973 --rc genhtml_function_coverage=1 00:10:29.973 --rc genhtml_legend=1 00:10:29.973 --rc geninfo_all_blocks=1 00:10:29.973 --rc geninfo_unexecuted_blocks=1 00:10:29.973 00:10:29.973 ' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:29.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.973 --rc genhtml_branch_coverage=1 00:10:29.973 --rc genhtml_function_coverage=1 00:10:29.973 --rc genhtml_legend=1 00:10:29.973 --rc geninfo_all_blocks=1 00:10:29.973 --rc geninfo_unexecuted_blocks=1 00:10:29.973 00:10:29.973 ' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:29.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.973 --rc genhtml_branch_coverage=1 00:10:29.973 --rc genhtml_function_coverage=1 00:10:29.973 --rc genhtml_legend=1 00:10:29.973 --rc geninfo_all_blocks=1 00:10:29.973 --rc geninfo_unexecuted_blocks=1 00:10:29.973 00:10:29.973 ' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:29.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.973 --rc genhtml_branch_coverage=1 00:10:29.973 --rc genhtml_function_coverage=1 00:10:29.973 --rc genhtml_legend=1 00:10:29.973 --rc geninfo_all_blocks=1 00:10:29.973 --rc geninfo_unexecuted_blocks=1 00:10:29.973 00:10:29.973 ' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.973 17:58:37 
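The lcov probe just above leans on a small pure-bash version comparator: split both version strings on `.`, `-`, or `:` via IFS, then compare component by component as integers (1.15 vs 2 yields less-than, so the branch/function coverage options get selected). A self-contained sketch of the same approach, assuming purely numeric components:

  cmp_ver() {    # echoes lt/eq/gt for two dotted version strings
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} > ${b[i]:-0} )) && { echo gt; return; }
      (( ${a[i]:-0} < ${b[i]:-0} )) && { echo lt; return; }
    done
    echo eq
  }
  cmp_ver 1.15 2   # -> lt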
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.973 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.974 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.974 17:58:37 
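The `[: : integer expression expected` complaint above is a genuine (non-fatal) script bug faithfully captured by the log: nvmf/common.sh line 33 ends up running `[ '' -eq 1 ]` because the variable under test expands to the empty string, and `[` demands integers on both sides of -eq. A defensive form, with VAR as a stand-in since the trace does not reveal the variable's name:

  # Hypothetical guard; VAR stands for whatever common.sh line 33 actually tests.
  if [ "${VAR:-0}" -eq 1 ]; then   # empty/unset falls back to 0, keeping -eq happy
    echo "feature enabled"
  fi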
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.974 17:58:37 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.098 17:58:44 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.098 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:38.099 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 
(0x15b3 - 0x1015)' 00:10:38.099 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:38.099 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:38.099 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 
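The block above walks the per-vendor device-ID tables (e810 and x722 for Intel, mlx for Mellanox), matches the two ConnectX-4 Lx functions at 0000:d9:00.0 and 0000:d9:00.1 (0x15b3 - 0x1015), and resolves each to its netdev via /sys/bus/pci/devices/$pci/net. A rough standalone equivalent of that discovery step, sketched directly over sysfs; the framework itself goes through a prebuilt pci_bus_cache rather than this walk:

# Hypothetical sketch: report Mellanox (vendor 0x15b3) PCI functions and their netdevs.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")    # e.g. 0x15b3
    device=$(cat "$pci/device")    # e.g. 0x1015 (ConnectX-4 Lx)
    [[ $vendor == 0x15b3 ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
done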
00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:38.099 17:58:44 
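rdma_device_init first loads the IB/RDMA kernel modules traced above, then allocates IPs on the matched mlx interfaces. The module-load step in isolation, with the module list copied verbatim from the trace and minimal error reporting added for illustration:

# Sketch of load_ib_rdma_modules as traced.
[ "$(uname -s)" = Linux ] || { echo "RDMA modules are Linux-only" >&2; exit 1; }
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "warning: could not load $mod" >&2
done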
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:38.099 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.099 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:38.099 altname enp217s0f0np0 00:10:38.099 altname ens818f0np0 00:10:38.099 inet 192.168.100.8/24 scope global mlx_0_0 00:10:38.099 valid_lft forever preferred_lft forever 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:38.099 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:38.100 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:38.100 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:38.100 altname enp217s0f1np1 00:10:38.100 altname ens818f1np1 00:10:38.100 inet 192.168.100.9/24 scope global mlx_0_1 00:10:38.100 valid_lft forever preferred_lft forever 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
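get_ip_address, traced twice above, is a three-stage pipeline: ip -o -4 prints one line per address, awk selects the ADDR/PREFIX field, and cut drops the prefix length. As a reusable function:

# Sketch of get_ip_address as traced (prints nothing if no IPv4 is assigned).
get_ip_address() {
    local interface=$1
    # field 4 of `ip -o -4 addr show` is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig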
nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:38.100 17:58:44 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:38.100 192.168.100.9' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:38.100 192.168.100.9' 
00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:38.100 192.168.100.9' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.100 ************************************ 00:10:38.100 START TEST nvmf_filesystem_no_in_capsule 00:10:38.100 ************************************ 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2265187 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2265187 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2265187 ']' 00:10:38.100 17:58:45 
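RDMA_IP_LIST arrives as one address per line, and the trace peels off the first and second target IPs with head/tail before setting the transport options and loading nvme-rdma. The same split, spelled out:

# Sketch of the target-IP split traced above.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
[ -z "$NVMF_FIRST_TARGET_IP" ] && { echo "no RDMA IPs found" >&2; exit 1; }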
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.100 17:58:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.100 [2024-12-09 17:58:45.189177] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:10:38.100 [2024-12-09 17:58:45.189229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.100 [2024-12-09 17:58:45.280448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.100 [2024-12-09 17:58:45.320780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.100 [2024-12-09 17:58:45.320823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.100 [2024-12-09 17:58:45.320832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.100 [2024-12-09 17:58:45.320840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.100 [2024-12-09 17:58:45.320847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:38.100 [2024-12-09 17:58:45.322456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.100 [2024-12-09 17:58:45.322567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.100 [2024-12-09 17:58:45.322680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.100 [2024-12-09 17:58:45.322678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.100 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.100 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:38.100 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.100 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.100 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.358 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.359 [2024-12-09 17:58:46.086933] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:38.359 [2024-12-09 17:58:46.108856] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13cb980/0x13cfe70) succeed. 00:10:38.359 [2024-12-09 17:58:46.118168] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13cd010/0x1411510) succeed. 
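nvmfappstart launches nvmf_tgt with the flags recorded above, waits for its RPC socket, and the test then creates the RDMA transport with in-capsule size 0 (the target bumps it to 256, the minimum needed for msdbd=16, hence the warning). A condensed sketch of the same bring-up, assuming an SPDK checkout; rpc_cmd in the framework forwards to scripts/rpc.py, and the readiness probe here is cruder than the framework's waitforlisten:

# Sketch only, not the framework's exact startup path.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0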
00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.359 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 Malloc1 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.617 [2024-12-09 17:58:46.394405] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:38.617 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:38.618 17:58:46 
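The provisioning RPCs above build the whole export in four calls: a 512 MiB malloc bdev, a subsystem that allows any host, the namespace, and an RDMA listener on the first target IP. Reassembled as a plain script, with names and arguments copied from the trace:

# Sketch of the traced provisioning sequence.
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                                       # -a: allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420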
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:38.618 { 00:10:38.618 "name": "Malloc1", 00:10:38.618 "aliases": [ 00:10:38.618 "a04892b3-bd90-409f-99b1-a58167fc40e0" 00:10:38.618 ], 00:10:38.618 "product_name": "Malloc disk", 00:10:38.618 "block_size": 512, 00:10:38.618 "num_blocks": 1048576, 00:10:38.618 "uuid": "a04892b3-bd90-409f-99b1-a58167fc40e0", 00:10:38.618 "assigned_rate_limits": { 00:10:38.618 "rw_ios_per_sec": 0, 00:10:38.618 "rw_mbytes_per_sec": 0, 00:10:38.618 "r_mbytes_per_sec": 0, 00:10:38.618 "w_mbytes_per_sec": 0 00:10:38.618 }, 00:10:38.618 "claimed": true, 00:10:38.618 "claim_type": "exclusive_write", 00:10:38.618 "zoned": false, 00:10:38.618 "supported_io_types": { 00:10:38.618 "read": true, 00:10:38.618 "write": true, 00:10:38.618 "unmap": true, 00:10:38.618 "flush": true, 00:10:38.618 "reset": true, 00:10:38.618 "nvme_admin": false, 00:10:38.618 "nvme_io": false, 00:10:38.618 "nvme_io_md": false, 00:10:38.618 "write_zeroes": true, 00:10:38.618 "zcopy": true, 00:10:38.618 "get_zone_info": false, 00:10:38.618 "zone_management": false, 00:10:38.618 "zone_append": false, 00:10:38.618 "compare": false, 00:10:38.618 "compare_and_write": false, 00:10:38.618 "abort": true, 00:10:38.618 "seek_hole": false, 00:10:38.618 "seek_data": false, 00:10:38.618 "copy": true, 00:10:38.618 "nvme_iov_md": false 00:10:38.618 }, 00:10:38.618 "memory_domains": [ 00:10:38.618 { 00:10:38.618 "dma_device_id": "system", 00:10:38.618 "dma_device_type": 1 00:10:38.618 }, 00:10:38.618 { 00:10:38.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.618 "dma_device_type": 2 00:10:38.618 } 00:10:38.618 ], 00:10:38.618 "driver_specific": {} 00:10:38.618 } 00:10:38.618 ]' 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
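get_bdev_size, traced above, derives the bdev size from the bdev_get_bdevs JSON: block_size (512) times num_blocks (1048576), scaled to MiB; the test then multiplies back up to bytes to compare against the attached namespace. As a function:

# Sketch of get_bdev_size as traced; requires jq.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(./scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1048576
    echo $((nb * bs / 1024 / 1024))                # MiB
}
malloc_size=$(($(get_bdev_size Malloc1) * 1024 * 1024))   # 536870912 bytes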
malloc_size=536870912 00:10:38.618 17:58:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:39.551 17:58:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.551 17:58:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:39.551 17:58:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.551 17:58:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:39.551 17:58:47 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
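The host side attaches with the nvme CLI, using the rig's generated hostnqn/hostid and the -i 15 option the framework adds for RDMA connects, then waitforserial polls lsblk until the namespace surfaces with the expected serial:

# Sketch of the traced connect; hostnqn/hostid are rig-specific values from the log.
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done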
-- # (( nvme_size == malloc_size )) 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:42.110 17:58:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.044 ************************************ 00:10:43.044 START TEST filesystem_ext4 00:10:43.044 ************************************ 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:43.044 mke2fs 1.47.0 (5-Feb-2023) 00:10:43.044 Discarding device blocks: 0/522240 done 00:10:43.044 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:43.044 Filesystem UUID: bea266d2-8c9b-4f7f-a61c-0ed9d69ae58c 00:10:43.044 Superblock backups stored on 
blocks: 00:10:43.044 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:43.044 00:10:43.044 Allocating group tables: 0/64 done 00:10:43.044 Writing inode tables: 0/64 done 00:10:43.044 Creating journal (8192 blocks): done 00:10:43.044 Writing superblocks and filesystem accounting information: 0/64 done 00:10:43.044 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2265187 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.044 00:10:43.044 real 0m0.199s 00:10:43.044 user 0m0.031s 00:10:43.044 sys 0m0.078s 00:10:43.044 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.045 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 ************************************ 00:10:43.045 END TEST filesystem_ext4 00:10:43.045 ************************************ 00:10:43.045 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:43.045 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.045 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.045 17:58:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:10:43.303 ************************************ 00:10:43.303 START TEST filesystem_btrfs 00:10:43.303 ************************************ 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:43.303 btrfs-progs v6.8.1 00:10:43.303 See https://btrfs.readthedocs.io for more information. 00:10:43.303 00:10:43.303 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:43.303 NOTE: several default settings have changed in version 5.15, please make sure 00:10:43.303 this does not affect your deployments: 00:10:43.303 - DUP for metadata (-m dup) 00:10:43.303 - enabled no-holes (-O no-holes) 00:10:43.303 - enabled free-space-tree (-R free-space-tree) 00:10:43.303 00:10:43.303 Label: (null) 00:10:43.303 UUID: 7e1dd00f-8216-45e8-a80b-abc3feba911b 00:10:43.303 Node size: 16384 00:10:43.303 Sector size: 4096 (CPU page size: 4096) 00:10:43.303 Filesystem size: 510.00MiB 00:10:43.303 Block group profiles: 00:10:43.303 Data: single 8.00MiB 00:10:43.303 Metadata: DUP 32.00MiB 00:10:43.303 System: DUP 8.00MiB 00:10:43.303 SSD detected: yes 00:10:43.303 Zoned device: no 00:10:43.303 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:43.303 Checksum: crc32c 00:10:43.303 Number of devices: 1 00:10:43.303 Devices: 00:10:43.303 ID SIZE PATH 00:10:43.303 1 510.00MiB /dev/nvme0n1p1 00:10:43.303 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2265187 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.303 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.562 00:10:43.562 real 0m0.253s 00:10:43.562 user 0m0.031s 00:10:43.562 sys 0m0.126s 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:43.562 ************************************ 00:10:43.562 END TEST filesystem_btrfs 
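Each filesystem pass above and below has the same shape: force-format the partition, mount it, create and remove a file with syncs in between, unmount, and confirm the nvmf target survived the I/O. Collapsed into one parametrized helper; $nvmfpid is the target PID started earlier in the log:

# Sketch of the repeated smoke test, not the framework's run_test machinery.
fs_smoke_test() {
    local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device
    local force=-f; [ "$fstype" = ext4 ] && force=-F   # mkfs.ext4 uses -F, btrfs/xfs use -f
    "mkfs.$fstype" $force "$dev" || return 1
    mount "$dev" "$mnt"
    touch "$mnt/aaa" && sync
    rm "$mnt/aaa" && sync
    umount "$mnt"
    kill -0 "$nvmfpid"     # fails if the nvmf target died under I/O
}
fs_smoke_test ext4 && fs_smoke_test btrfs && fs_smoke_test xfs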
00:10:43.562 ************************************ 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.562 ************************************ 00:10:43.562 START TEST filesystem_xfs 00:10:43.562 ************************************ 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:43.562 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:43.562 = sectsz=512 attr=2, projid32bit=1 00:10:43.562 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:43.562 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:43.562 data = bsize=4096 blocks=130560, imaxpct=25 00:10:43.562 = sunit=0 swidth=0 blks 00:10:43.562 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:43.562 log =internal log bsize=4096 blocks=16384, version=2 00:10:43.562 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:43.562 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.562 Discarding blocks...Done. 
00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.562 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2265187 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.820 00:10:43.820 real 0m0.219s 00:10:43.820 user 0m0.034s 00:10:43.820 sys 0m0.077s 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:43.820 ************************************ 00:10:43.820 END TEST filesystem_xfs 00:10:43.820 ************************************ 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:43.820 17:58:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:44.754 17:58:52 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2265187 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2265187 ']' 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2265187 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265187 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265187' 00:10:44.754 killing process with pid 2265187 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2265187 00:10:44.754 17:58:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2265187 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:45.321 00:10:45.321 real 0m7.953s 00:10:45.321 user 0m31.179s 00:10:45.321 sys 0m1.227s 00:10:45.321 17:58:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.321 ************************************ 00:10:45.321 END TEST nvmf_filesystem_no_in_capsule 00:10:45.321 ************************************ 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.321 ************************************ 00:10:45.321 START TEST nvmf_filesystem_in_capsule 00:10:45.321 ************************************ 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2266743 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2266743 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2266743 ']' 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
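nvmfappstart launches a fresh target (pid 2266743) and waitforlisten blocks until the RPC socket answers. A hedged sketch of that launch-and-wait step: nvmf_tgt and rpc.py are the real SPDK binaries with the flags shown in the trace, but the polling loop here is an illustrative assumption rather than the verbatim waitforlisten implementation:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target can answer RPCs.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done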
00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.321 17:58:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.321 [2024-12-09 17:58:53.230060] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:10:45.321 [2024-12-09 17:58:53.230109] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.580 [2024-12-09 17:58:53.321678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.580 [2024-12-09 17:58:53.361325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.580 [2024-12-09 17:58:53.361364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.580 [2024-12-09 17:58:53.361374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.580 [2024-12-09 17:58:53.361382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.580 [2024-12-09 17:58:53.361389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.580 [2024-12-09 17:58:53.363011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.580 [2024-12-09 17:58:53.363133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.580 [2024-12-09 17:58:53.363242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.580 [2024-12-09 17:58:53.363243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.146 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.146 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:46.146 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.146 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.146 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.404 [2024-12-09 17:58:54.157079] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2040980/0x2044e70) 
succeed. 00:10:46.404 [2024-12-09 17:58:54.166325] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2042010/0x2086510) succeed. 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.404 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 Malloc1 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 [2024-12-09 17:58:54.446666] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
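Collected from the rpc_cmd calls above, the full provisioning sequence as plain rpc.py invocations (a sketch; the script drives them through the rpc_cmd wrapper). The -c 4096 on the transport is the in-capsule data size this test variant exists to exercise:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420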
00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:46.662 { 00:10:46.662 "name": "Malloc1", 00:10:46.662 "aliases": [ 00:10:46.662 "2cf0a11e-97f8-4014-be31-eb3e28a39cc6" 00:10:46.662 ], 00:10:46.662 "product_name": "Malloc disk", 00:10:46.662 "block_size": 512, 00:10:46.662 "num_blocks": 1048576, 00:10:46.662 "uuid": "2cf0a11e-97f8-4014-be31-eb3e28a39cc6", 00:10:46.662 "assigned_rate_limits": { 00:10:46.662 "rw_ios_per_sec": 0, 00:10:46.662 "rw_mbytes_per_sec": 0, 00:10:46.662 "r_mbytes_per_sec": 0, 00:10:46.662 "w_mbytes_per_sec": 0 00:10:46.662 }, 00:10:46.662 "claimed": true, 00:10:46.662 "claim_type": "exclusive_write", 00:10:46.662 "zoned": false, 00:10:46.662 "supported_io_types": { 00:10:46.662 "read": true, 00:10:46.662 "write": true, 00:10:46.662 "unmap": true, 00:10:46.662 "flush": true, 00:10:46.662 "reset": true, 00:10:46.662 "nvme_admin": false, 00:10:46.662 "nvme_io": false, 00:10:46.662 "nvme_io_md": false, 00:10:46.662 "write_zeroes": true, 00:10:46.662 "zcopy": true, 00:10:46.662 "get_zone_info": false, 00:10:46.662 "zone_management": false, 00:10:46.662 "zone_append": false, 00:10:46.662 "compare": false, 00:10:46.662 "compare_and_write": false, 00:10:46.662 "abort": true, 00:10:46.662 "seek_hole": false, 00:10:46.662 "seek_data": false, 00:10:46.662 "copy": true, 00:10:46.662 "nvme_iov_md": false 00:10:46.662 }, 00:10:46.662 "memory_domains": [ 00:10:46.662 { 00:10:46.662 "dma_device_id": "system", 00:10:46.662 "dma_device_type": 1 00:10:46.662 }, 00:10:46.662 { 00:10:46.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.662 "dma_device_type": 2 00:10:46.662 } 00:10:46.662 ], 00:10:46.662 "driver_specific": {} 00:10:46.662 } 00:10:46.662 ]' 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
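get_bdev_size above pulls the bdev descriptor over RPC and derives the size with jq. A sketch of that arithmetic; the MiB conversion is an assumption inferred from the echoed 512, consistent with 512 bytes/block * 1048576 blocks = 536870912 bytes:

    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576
    echo $(( bs * nb / 1024 / 1024 ))             # 512; rescaled to malloc_size=536870912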
00:10:46.662 17:58:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:47.595 17:58:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.595 17:58:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.595 17:58:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.595 17:58:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.595 17:58:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.122 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.123 17:58:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.123 17:58:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.054 ************************************ 00:10:51.054 START TEST filesystem_in_capsule_ext4 00:10:51.054 ************************************ 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:51.054 mke2fs 1.47.0 (5-Feb-2023) 00:10:51.054 Discarding device blocks: 0/522240 done 00:10:51.054 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:51.054 Filesystem UUID: d34fcb1a-c21e-4744-8bef-8bf1dc6496fc 00:10:51.054 
Superblock backups stored on blocks: 00:10:51.054 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:51.054 00:10:51.054 Allocating group tables: 0/64 done 00:10:51.054 Writing inode tables: 0/64 done 00:10:51.054 Creating journal (8192 blocks): done 00:10:51.054 Writing superblocks and filesystem accounting information: 0/64 done 00:10:51.054 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:51.054 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:51.055 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.055 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2266743 00:10:51.055 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.055 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.055 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.055 17:58:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.055 00:10:51.055 real 0m0.199s 00:10:51.055 user 0m0.033s 00:10:51.055 sys 0m0.073s 00:10:51.055 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.055 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:51.055 ************************************ 00:10:51.055 END TEST filesystem_in_capsule_ext4 00:10:51.055 ************************************ 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.312 17:58:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.312 ************************************ 00:10:51.312 START TEST filesystem_in_capsule_btrfs 00:10:51.312 ************************************ 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:51.312 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:51.313 btrfs-progs v6.8.1 00:10:51.313 See https://btrfs.readthedocs.io for more information. 00:10:51.313 00:10:51.313 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:51.313 NOTE: several default settings have changed in version 5.15, please make sure 00:10:51.313 this does not affect your deployments: 00:10:51.313 - DUP for metadata (-m dup) 00:10:51.313 - enabled no-holes (-O no-holes) 00:10:51.313 - enabled free-space-tree (-R free-space-tree) 00:10:51.313 00:10:51.313 Label: (null) 00:10:51.313 UUID: 9694d891-01d5-4cb5-9cd8-5b4e3565d8c1 00:10:51.313 Node size: 16384 00:10:51.313 Sector size: 4096 (CPU page size: 4096) 00:10:51.313 Filesystem size: 510.00MiB 00:10:51.313 Block group profiles: 00:10:51.313 Data: single 8.00MiB 00:10:51.313 Metadata: DUP 32.00MiB 00:10:51.313 System: DUP 8.00MiB 00:10:51.313 SSD detected: yes 00:10:51.313 Zoned device: no 00:10:51.313 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:51.313 Checksum: crc32c 00:10:51.313 Number of devices: 1 00:10:51.313 Devices: 00:10:51.313 ID SIZE PATH 00:10:51.313 1 510.00MiB /dev/nvme0n1p1 00:10:51.313 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:51.313 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2266743 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.571 00:10:51.571 real 0m0.253s 00:10:51.571 user 0m0.031s 00:10:51.571 sys 0m0.121s 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.571 ************************************ 00:10:51.571 END TEST filesystem_in_capsule_btrfs 00:10:51.571 ************************************ 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.571 ************************************ 00:10:51.571 START TEST filesystem_in_capsule_xfs 00:10:51.571 ************************************ 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:51.571 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:51.571 = sectsz=512 attr=2, projid32bit=1 00:10:51.571 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:51.571 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:51.571 data = bsize=4096 blocks=130560, imaxpct=25 00:10:51.571 = sunit=0 swidth=0 blks 00:10:51.571 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:51.571 log =internal log bsize=4096 blocks=16384, version=2 00:10:51.571 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:51.571 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:51.571 Discarding blocks...Done. 
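Every filesystem variant runs the same smoke test right after mkfs, as the next trace lines show: mount, create and delete a file, unmount, then confirm the target survived. A sketch assembled from the traced commands (kill -0 delivers no signal, it only checks that the pid still exists):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # 2266743 here; fails if the target died
    lsblk -l -o NAME | grep -q -w nvme0n1   # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1 # partition still visible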
00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:51.571 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2266743 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.829 00:10:51.829 real 0m0.209s 00:10:51.829 user 0m0.027s 00:10:51.829 sys 0m0.078s 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.829 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:51.829 ************************************ 00:10:51.829 END TEST filesystem_in_capsule_xfs 00:10:51.830 ************************************ 00:10:51.830 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:51.830 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:51.830 17:58:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.761 17:59:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.761 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2266743 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2266743 ']' 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2266743 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2266743 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2266743' 00:10:53.018 killing process with pid 2266743 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2266743 00:10:53.018 17:59:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2266743 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:53.277 00:10:53.277 real 0m8.015s 
00:10:53.277 user 0m31.343s 00:10:53.277 sys 0m1.251s 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.277 ************************************ 00:10:53.277 END TEST nvmf_filesystem_in_capsule 00:10:53.277 ************************************ 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.277 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:53.277 rmmod nvme_rdma 00:10:53.535 rmmod nvme_fabrics 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:53.536 00:10:53.536 real 0m23.990s 00:10:53.536 user 1m4.923s 00:10:53.536 sys 0m8.343s 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.536 ************************************ 00:10:53.536 END TEST nvmf_filesystem 00:10:53.536 ************************************ 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.536 ************************************ 00:10:53.536 START TEST nvmf_target_discovery 00:10:53.536 ************************************ 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:53.536 * Looking for test storage... 
00:10:53.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.536 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.795 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.796 --rc genhtml_branch_coverage=1 00:10:53.796 --rc genhtml_function_coverage=1 00:10:53.796 --rc genhtml_legend=1 00:10:53.796 --rc geninfo_all_blocks=1 00:10:53.796 --rc geninfo_unexecuted_blocks=1 00:10:53.796 00:10:53.796 ' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.796 --rc genhtml_branch_coverage=1 00:10:53.796 --rc genhtml_function_coverage=1 00:10:53.796 --rc genhtml_legend=1 00:10:53.796 --rc geninfo_all_blocks=1 00:10:53.796 --rc geninfo_unexecuted_blocks=1 00:10:53.796 00:10:53.796 ' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.796 --rc genhtml_branch_coverage=1 00:10:53.796 --rc genhtml_function_coverage=1 00:10:53.796 --rc genhtml_legend=1 00:10:53.796 --rc geninfo_all_blocks=1 00:10:53.796 --rc geninfo_unexecuted_blocks=1 00:10:53.796 00:10:53.796 ' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.796 --rc genhtml_branch_coverage=1 00:10:53.796 --rc genhtml_function_coverage=1 00:10:53.796 --rc genhtml_legend=1 00:10:53.796 --rc geninfo_all_blocks=1 00:10:53.796 --rc geninfo_unexecuted_blocks=1 00:10:53.796 00:10:53.796 ' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.796 17:59:01 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.796 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.796 17:59:01 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.922 17:59:08 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.922 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
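The trace above shows nvmf/common.sh grouping NICs into per-family arrays (e810, x722, mlx) by looking up vendor:device IDs in a pci_bus_cache map, then collapsing the families into pci_devs. A minimal stand-alone sketch of that grouping pattern, using a hypothetical lspci-built cache in place of SPDK's real one (which, as the trace shows, keys on 0x-prefixed IDs):

    # Sketch only: approximates nvmf/common.sh's device grouping.
    # pci_bus_cache here is rebuilt from lspci, not SPDK's cached map,
    # and its keys are bare hex (15b3:1015) rather than 0x15b3:0x1015.
    declare -A pci_bus_cache
    while read -r addr vendor device; do
        pci_bus_cache["$vendor:$device"]+="$addr "
    done < <(lspci -Dnmm | awk -F'"' '{print $1, $4, $6}')

    mlx=(${pci_bus_cache["15b3:1015"]})   # ConnectX-4 Lx, the family found in this run
    e810=(${pci_bus_cache["8086:1592"]})
    pci_devs=("${e810[@]}" "${mlx[@]}")
    ((${#pci_devs[@]})) || echo 'no supported NVMe-oF NICs found' >&2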
00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:01.923 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:01.923 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:01.923 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.923 17:59:08 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:01.923 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
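rdma_device_init above loads the full RDMA kernel stack before any addressing happens. The module list below is taken verbatim from the common.sh@66-72 trace; only the loop form is a condensed sketch:

    # Condensed sketch of load_ib_rdma_modules as traced above (Linux only).
    load_ib_rdma_modules() {
        [[ $(uname -s) == Linux ]] || return 0
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || { echo "failed to load $mod" >&2; return 1; }
        done
    }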
00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:01.923 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:01.923 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:01.923 altname enp217s0f0np0 00:11:01.923 altname ens818f0np0 00:11:01.923 inet 192.168.100.8/24 scope global mlx_0_0 00:11:01.923 valid_lft forever preferred_lft forever 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.923 17:59:08 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:01.923 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:01.923 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:01.923 altname enp217s0f1np1 00:11:01.923 altname ens818f1np1 00:11:01.923 inet 192.168.100.9/24 scope global mlx_0_1 00:11:01.923 valid_lft forever preferred_lft forever 00:11:01.923 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
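get_ip_address, traced repeatedly above (common.sh@116-117), pulls an interface's first IPv4 address out of iproute2 output with awk and cut. Reconstructed as a stand-alone helper, straight from the traced pipeline:

    # Same pipeline the trace shows for get_ip_address.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_0)          # -> 192.168.100.8 in this run
    [[ -n $ip ]] || echo 'no IPv4 address on mlx_0_0' >&2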
00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:01.924 192.168.100.9' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:01.924 192.168.100.9' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:01.924 192.168.100.9' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.924 17:59:08 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2271709 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2271709 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2271709 ']' 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.924 17:59:08 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 [2024-12-09 17:59:08.929591] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:11:01.924 [2024-12-09 17:59:08.929654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.924 [2024-12-09 17:59:09.023078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.924 [2024-12-09 17:59:09.061677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.924 [2024-12-09 17:59:09.061718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.924 [2024-12-09 17:59:09.061728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.924 [2024-12-09 17:59:09.061736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.924 [2024-12-09 17:59:09.061742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
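nvmfappstart above backgrounds nvmf_tgt and waitforlisten then polls until the RPC socket answers, which is when the "Waiting for process to start up..." message resolves. A hypothetical condensed version of that start-and-wait pattern (the real autotest_common.sh logic carries more retries and diagnostics; the poll loop below is an assumption, with paths copied from this log):

    # Sketch, not the real nvmfappstart: start the target and poll its RPC socket.
    spdk_root=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path from this log
    rpc_sock=/var/tmp/spdk.sock
    "$spdk_root/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # spdk_get_version is a cheap RPC; success means the app is listening.
        "$spdk_root/scripts/rpc.py" -s "$rpc_sock" spdk_get_version &>/dev/null && break
        sleep 0.1
    done
    kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during startup' >&2; exit 1; }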
00:11:01.924 [2024-12-09 17:59:09.063342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.924 [2024-12-09 17:59:09.063483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.924 [2024-12-09 17:59:09.063612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.924 [2024-12-09 17:59:09.063613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.924 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.924 [2024-12-09 17:59:09.853650] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x920980/0x924e70) succeed. 00:11:01.924 [2024-12-09 17:59:09.862935] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x922010/0x966510) succeed. 
00:11:02.182 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:02.182 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.182 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:02.182 17:59:09 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 Null1 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 [2024-12-09 17:59:10.043351] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 Null2 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:02.182 17:59:10 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:02.182 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 Null3 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 Null4 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.183 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:11:02.441 00:11:02.441 Discovery Log Number of Records 6, Generation counter 6 00:11:02.441 =====Discovery Log Entry 0====== 00:11:02.441 trtype: rdma 00:11:02.441 adrfam: ipv4 00:11:02.441 subtype: current discovery subsystem 00:11:02.441 treq: not required 00:11:02.441 portid: 0 00:11:02.441 trsvcid: 4420 00:11:02.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.441 traddr: 192.168.100.8 00:11:02.441 eflags: explicit discovery connections, duplicate discovery information 00:11:02.441 rdma_prtype: not specified 00:11:02.441 rdma_qptype: connected 00:11:02.441 rdma_cms: rdma-cm 00:11:02.441 rdma_pkey: 0x0000 00:11:02.441 =====Discovery Log Entry 1====== 00:11:02.441 trtype: rdma 00:11:02.441 adrfam: ipv4 00:11:02.441 subtype: nvme subsystem 00:11:02.441 treq: not required 00:11:02.441 portid: 0 00:11:02.441 trsvcid: 4420 00:11:02.441 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:02.441 traddr: 192.168.100.8 00:11:02.441 eflags: none 00:11:02.441 rdma_prtype: not specified 00:11:02.441 rdma_qptype: connected 00:11:02.441 rdma_cms: rdma-cm 00:11:02.441 rdma_pkey: 0x0000 00:11:02.441 =====Discovery Log Entry 2====== 00:11:02.441 trtype: rdma 00:11:02.441 adrfam: ipv4 00:11:02.441 subtype: nvme subsystem 00:11:02.441 treq: not required 00:11:02.441 portid: 0 00:11:02.441 trsvcid: 4420 00:11:02.441 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:02.441 traddr: 192.168.100.8 00:11:02.441 eflags: none 00:11:02.441 rdma_prtype: not specified 00:11:02.441 rdma_qptype: connected 00:11:02.441 rdma_cms: rdma-cm 00:11:02.441 rdma_pkey: 0x0000 00:11:02.441 =====Discovery Log Entry 3====== 00:11:02.441 trtype: rdma 00:11:02.441 adrfam: ipv4 00:11:02.441 subtype: nvme subsystem 00:11:02.441 treq: not required 00:11:02.441 portid: 0 00:11:02.441 trsvcid: 4420 00:11:02.441 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:02.441 traddr: 192.168.100.8 00:11:02.441 eflags: none 00:11:02.441 rdma_prtype: not specified 00:11:02.441 rdma_qptype: connected 00:11:02.441 rdma_cms: rdma-cm 00:11:02.441 rdma_pkey: 0x0000 00:11:02.441 =====Discovery Log Entry 4====== 00:11:02.441 trtype: rdma 00:11:02.441 adrfam: ipv4 00:11:02.441 subtype: nvme subsystem 00:11:02.441 treq: not required 00:11:02.441 portid: 0 00:11:02.441 trsvcid: 4420 00:11:02.441 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:02.441 traddr: 192.168.100.8 00:11:02.441 eflags: none 00:11:02.441 rdma_prtype: not specified 00:11:02.441 rdma_qptype: connected 00:11:02.441 rdma_cms: rdma-cm 00:11:02.441 rdma_pkey: 0x0000 00:11:02.441 =====Discovery Log Entry 5====== 00:11:02.441 trtype: rdma 00:11:02.441 adrfam: ipv4 00:11:02.441 subtype: discovery subsystem referral 00:11:02.441 treq: not required 00:11:02.441 portid: 0 00:11:02.441 trsvcid: 4430 00:11:02.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.441 traddr: 192.168.100.8 00:11:02.441 eflags: none 00:11:02.441 rdma_prtype: unrecognized 00:11:02.441 rdma_qptype: unrecognized 00:11:02.441 rdma_cms: unrecognized 00:11:02.441 rdma_pkey: 0x0000 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:02.441 Perform nvmf subsystem discovery via RPC 00:11:02.441 17:59:10 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.441 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.441 [ 00:11:02.441 { 00:11:02.441 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:02.441 "subtype": "Discovery", 00:11:02.441 "listen_addresses": [ 00:11:02.441 { 00:11:02.441 "trtype": "RDMA", 00:11:02.441 "adrfam": "IPv4", 00:11:02.441 "traddr": "192.168.100.8", 00:11:02.441 "trsvcid": "4420" 00:11:02.441 } 00:11:02.441 ], 00:11:02.441 "allow_any_host": true, 00:11:02.441 "hosts": [] 00:11:02.441 }, 00:11:02.441 { 00:11:02.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.441 "subtype": "NVMe", 00:11:02.441 "listen_addresses": [ 00:11:02.441 { 00:11:02.441 "trtype": "RDMA", 00:11:02.441 "adrfam": "IPv4", 00:11:02.441 "traddr": "192.168.100.8", 00:11:02.441 "trsvcid": "4420" 00:11:02.441 } 00:11:02.441 ], 00:11:02.441 "allow_any_host": true, 00:11:02.441 "hosts": [], 00:11:02.441 "serial_number": "SPDK00000000000001", 00:11:02.441 "model_number": "SPDK bdev Controller", 00:11:02.441 "max_namespaces": 32, 00:11:02.441 "min_cntlid": 1, 00:11:02.441 "max_cntlid": 65519, 00:11:02.441 "namespaces": [ 00:11:02.441 { 00:11:02.441 "nsid": 1, 00:11:02.441 "bdev_name": "Null1", 00:11:02.441 "name": "Null1", 00:11:02.442 "nguid": "0ACCB22F771B41D1BEFA9AC8F498C0FD", 00:11:02.442 "uuid": "0accb22f-771b-41d1-befa-9ac8f498c0fd" 00:11:02.442 } 00:11:02.442 ] 00:11:02.442 }, 00:11:02.442 { 00:11:02.442 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:02.442 "subtype": "NVMe", 00:11:02.442 "listen_addresses": [ 00:11:02.442 { 00:11:02.442 "trtype": "RDMA", 00:11:02.442 "adrfam": "IPv4", 00:11:02.442 "traddr": "192.168.100.8", 00:11:02.442 "trsvcid": "4420" 00:11:02.442 } 00:11:02.442 ], 00:11:02.442 "allow_any_host": true, 00:11:02.442 "hosts": [], 00:11:02.442 "serial_number": "SPDK00000000000002", 00:11:02.442 "model_number": "SPDK bdev Controller", 00:11:02.442 "max_namespaces": 32, 00:11:02.442 "min_cntlid": 1, 00:11:02.442 "max_cntlid": 65519, 00:11:02.442 "namespaces": [ 00:11:02.442 { 00:11:02.442 "nsid": 1, 00:11:02.442 "bdev_name": "Null2", 00:11:02.442 "name": "Null2", 00:11:02.442 "nguid": "54DE65C049404D928EDEED79FFCE29B1", 00:11:02.442 "uuid": "54de65c0-4940-4d92-8ede-ed79ffce29b1" 00:11:02.442 } 00:11:02.442 ] 00:11:02.442 }, 00:11:02.442 { 00:11:02.442 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:02.442 "subtype": "NVMe", 00:11:02.442 "listen_addresses": [ 00:11:02.442 { 00:11:02.442 "trtype": "RDMA", 00:11:02.442 "adrfam": "IPv4", 00:11:02.442 "traddr": "192.168.100.8", 00:11:02.442 "trsvcid": "4420" 00:11:02.442 } 00:11:02.442 ], 00:11:02.442 "allow_any_host": true, 00:11:02.442 "hosts": [], 00:11:02.442 "serial_number": "SPDK00000000000003", 00:11:02.442 "model_number": "SPDK bdev Controller", 00:11:02.442 "max_namespaces": 32, 00:11:02.442 "min_cntlid": 1, 00:11:02.442 "max_cntlid": 65519, 00:11:02.442 "namespaces": [ 00:11:02.442 { 00:11:02.442 "nsid": 1, 00:11:02.442 "bdev_name": "Null3", 00:11:02.442 "name": "Null3", 00:11:02.442 "nguid": "9E328159ABDE42D5ABEF13DF8558A2CB", 00:11:02.442 "uuid": "9e328159-abde-42d5-abef-13df8558a2cb" 00:11:02.442 } 00:11:02.442 ] 00:11:02.442 }, 00:11:02.442 { 00:11:02.442 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:02.442 "subtype": "NVMe", 00:11:02.442 "listen_addresses": [ 00:11:02.442 { 00:11:02.442 
"trtype": "RDMA", 00:11:02.442 "adrfam": "IPv4", 00:11:02.442 "traddr": "192.168.100.8", 00:11:02.442 "trsvcid": "4420" 00:11:02.442 } 00:11:02.442 ], 00:11:02.442 "allow_any_host": true, 00:11:02.442 "hosts": [], 00:11:02.442 "serial_number": "SPDK00000000000004", 00:11:02.442 "model_number": "SPDK bdev Controller", 00:11:02.442 "max_namespaces": 32, 00:11:02.442 "min_cntlid": 1, 00:11:02.442 "max_cntlid": 65519, 00:11:02.442 "namespaces": [ 00:11:02.442 { 00:11:02.442 "nsid": 1, 00:11:02.442 "bdev_name": "Null4", 00:11:02.442 "name": "Null4", 00:11:02.442 "nguid": "B6F0F235D9044638B85637EBAA1888BB", 00:11:02.442 "uuid": "b6f0f235-d904-4638-b856-37ebaa1888bb" 00:11:02.442 } 00:11:02.442 ] 00:11:02.442 } 00:11:02.442 ] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:02.442 
17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.442 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:02.700 17:59:10 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:02.700 rmmod nvme_rdma 00:11:02.700 rmmod nvme_fabrics 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2271709 ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2271709 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2271709 ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2271709 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2271709 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2271709' 00:11:02.700 killing process with pid 2271709 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2271709 00:11:02.700 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2271709 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:02.960 00:11:02.960 real 0m9.441s 00:11:02.960 user 0m9.150s 00:11:02.960 sys 0m6.131s 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.960 ************************************ 00:11:02.960 END TEST nvmf_target_discovery 
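[annotation] The cleanup traced above (nvmf/common.sh@516-518 plus autotest_common.sh@954-978) condenses to roughly the following; the retry bound and the reactor_0/sudo process-name check are taken from the trace, the rest is a sketch:

  nvmfcleanup() {
      sync
      set +e
      for i in {1..20}; do
          modprobe -v -r nvme-rdma &&
              modprobe -v -r nvme-fabrics && break
      done
      set -e
      return 0
  }

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                    # fails fast if already gone
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }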
00:11:02.960 ************************************ 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.960 ************************************ 00:11:02.960 START TEST nvmf_referrals 00:11:02.960 ************************************ 00:11:02.960 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:03.220 * Looking for test storage... 00:11:03.220 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.220 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.220 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.220 17:59:10 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.220 --rc genhtml_branch_coverage=1 00:11:03.220 --rc genhtml_function_coverage=1 00:11:03.220 --rc genhtml_legend=1 00:11:03.220 --rc geninfo_all_blocks=1 00:11:03.220 --rc geninfo_unexecuted_blocks=1 00:11:03.220 00:11:03.220 ' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.220 --rc genhtml_branch_coverage=1 00:11:03.220 --rc genhtml_function_coverage=1 00:11:03.220 --rc genhtml_legend=1 00:11:03.220 --rc geninfo_all_blocks=1 00:11:03.220 --rc geninfo_unexecuted_blocks=1 00:11:03.220 00:11:03.220 ' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.220 --rc genhtml_branch_coverage=1 00:11:03.220 --rc genhtml_function_coverage=1 00:11:03.220 --rc genhtml_legend=1 00:11:03.220 --rc geninfo_all_blocks=1 00:11:03.220 --rc geninfo_unexecuted_blocks=1 00:11:03.220 00:11:03.220 ' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.220 --rc genhtml_branch_coverage=1 00:11:03.220 --rc genhtml_function_coverage=1 00:11:03.220 --rc genhtml_legend=1 00:11:03.220 --rc geninfo_all_blocks=1 00:11:03.220 --rc geninfo_unexecuted_blocks=1 00:11:03.220 00:11:03.220 ' 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
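[annotation] The scripts/common.sh@333-368 lines above are the version compare that decides whether the installed lcov (1.15 here) is older than 2, which selects the legacy --rc lcov_* option spelling. A simplified sketch of that field-by-field comparison (the real helper also routes each field through its decimal normalizer, elided here):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local ver1 ver2 op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }
      done
      [[ $op == *'='* ]]
  }

lt 1.15 2 succeeds on the first field (1 < 2), so the trace goes on to export the lcov_branch_coverage/lcov_function_coverage form of LCOV_OPTS.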
nvmf/common.sh@7 -- # uname -s 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.220 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.221 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
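[annotation] The lone stderr line in this block ("line 33: [: : integer expression expected") is a benign unguarded numeric test in nvmf/common.sh's build_nvmf_app_args: a flag that is unset in this job gets compared with -eq. The failing shape, with a defensive variant (the flag and option names are not visible in the trace and are placeholders):

  # Failing shape: [ '' -eq 1 ] is not a valid numeric comparison.
  if [ "$SOME_UNSET_FLAG" -eq 1 ]; then NVMF_APP+=(--some-option); fi
  # Guarded variant that defaults the empty value to 0:
  if [ "${SOME_UNSET_FLAG:-0}" -eq 1 ]; then NVMF_APP+=(--some-option); fi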
NVMF_REFERRAL_IP_2=127.0.0.3 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.221 17:59:11 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:11.343 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:11.343 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:11.344 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:11.344 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:11.344 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
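[annotation] The two "Found net devices under ..." pairs above come from a sysfs walk over the detected Mellanox functions (nvmf/common.sh@410-429). Condensed from the trace:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the netdev name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done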
[[ rdma == tcp ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.344 17:59:18 
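[annotation] rdma_device_init above first modprobes the IB/RDMA core stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), then allocate_nic_ips iterates get_rdma_if_list. That helper, reconstructed from the nvmf/common.sh@96-109 trace:

  get_rdma_if_list() {
      local net_dev rxe_net_dev rxe_net_devs
      mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # rxe_cfg_small.sh: RDMA-capable netdevs
      # Emit each detected device that rxe_cfg also reports.
      for net_dev in "${net_devs[@]}"; do
          for rxe_net_dev in "${rxe_net_devs[@]}"; do
              if [[ $net_dev == "$rxe_net_dev" ]]; then
                  echo "$net_dev"
                  continue 2
              fi
          done
      done
  }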
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:11.344 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:11.344 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:11.344 altname enp217s0f0np0 00:11:11.344 altname ens818f0np0 00:11:11.344 inet 192.168.100.8/24 scope global mlx_0_0 00:11:11.344 valid_lft forever preferred_lft forever 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:11.344 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:11.344 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:11.344 altname enp217s0f1np1 00:11:11.344 altname ens818f1np1 00:11:11.344 inet 192.168.100.9/24 scope global mlx_0_1 00:11:11.344 valid_lft forever preferred_lft forever 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:11.344 17:59:18 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.344 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:11.345 192.168.100.9' 
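[annotation] Both addresses above (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1) are read back with the same one-liner (nvmf/common.sh@116-117); get_available_rdma_ips then concatenates one address per RDMA interface into RDMA_IP_LIST:

  get_ip_address() {
      local interface=$1
      # Field $4 of `ip -o -4 addr show` is the CIDR, e.g. 192.168.100.8/24.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # Sketch of the harvest loop (@90-91); exact quoting may differ:
  RDMA_IP_LIST=$(
      for nic_name in $(get_rdma_if_list); do
          get_ip_address "$nic_name"
      done
  )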
00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:11.345 192.168.100.9' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:11.345 192.168.100.9' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2275437 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2275437 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2275437 ']' 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.345 17:59:18 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 [2024-12-09 17:59:18.371978] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
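[annotation] The head/tail juggling above (@485-486) peels the first and second addresses off RDMA_IP_LIST; nvmfappstart then launches the target and blocks on its RPC socket. Sketch (the backgrounding via & and $! is inferred, not shown verbatim in the trace):

  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i "$NVMF_APP_SHM_ID" -e 0xFFFF -m 0xF &
  nvmfpid=$!                 # 2275437 in this run
  waitforlisten "$nvmfpid"   # waits on /var/tmp/spdk.sock before any rpc_cmd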
00:11:11.345 [2024-12-09 17:59:18.372036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.345 [2024-12-09 17:59:18.463059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.345 [2024-12-09 17:59:18.501853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.345 [2024-12-09 17:59:18.501895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.345 [2024-12-09 17:59:18.501904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.345 [2024-12-09 17:59:18.501912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.345 [2024-12-09 17:59:18.501935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.345 [2024-12-09 17:59:18.503757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.345 [2024-12-09 17:59:18.503868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.345 [2024-12-09 17:59:18.503989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.345 [2024-12-09 17:59:18.503990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.345 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.345 [2024-12-09 17:59:19.294388] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15af980/0x15b3e70) succeed. 00:11:11.345 [2024-12-09 17:59:19.303495] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15b1010/0x15f5510) succeed. 
00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.602 [2024-12-09 17:59:19.444546] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:11.602 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
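[annotation] referrals.sh's setup phase (@40-48) is visible above; condensed, with the three add calls folded into a loop for brevity:

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done

  # The target must now report exactly three referrals.
  (( $(rpc_cmd nvmf_discovery_get_referrals | jq length) == 3 ))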
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:11.603 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:11.860 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.117 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:12.118 17:59:19 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.118 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
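[annotation] get_referral_ips is the workhorse of these checks: the same referral list is read once from the target's RPC view and once from a real discovery-log query, and the two sorted address strings are compared. Reconstructed from the @19-26 trace (the echo-based single-line join matches the traced output, e.g. "echo 127.0.0.2 127.0.0.2"):

  get_referral_ips() {
      if [[ $1 == rpc ]]; then
          echo $(rpc_cmd nvmf_discovery_get_referrals \
              | jq -r '.[].address.traddr' | sort)
      elif [[ $1 == nvme ]]; then
          echo $(nvme discover "${NVME_HOST[@]}" -t rdma \
                  -a 192.168.100.8 -s 8009 -o json \
              | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
              | sort)
      fi
  }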
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.375 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.632 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:12.889 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
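For reference, the referral lifecycle being verified above reduces to a handful of RPCs checked against a host-side discovery. A condensed sketch, assuming SPDK's stock rpc.py client on PATH (rpc_cmd in the trace is the test suite's thin wrapper around it) and the listener at 192.168.100.8:8009 shown in the trace:

    # Register two referrals on the discovery service, then compare views.
    rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # Target's view of its own referrals:
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # Host's view via the discovery log (same jq filter as the trace uses):
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Removing a referral must drop it from both views:
    rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1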
00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:13.147 rmmod nvme_rdma 00:11:13.147 rmmod nvme_fabrics 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2275437 ']' 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2275437 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2275437 ']' 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2275437 00:11:13.147 17:59:20 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275437 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2275437' 00:11:13.147 killing process with pid 2275437 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2275437 00:11:13.147 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2275437 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:13.406 00:11:13.406 real 0m10.429s 00:11:13.406 user 0m14.091s 00:11:13.406 sys 0m6.472s 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.406 17:59:21 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.406 ************************************ 00:11:13.406 END TEST nvmf_referrals 00:11:13.406 ************************************ 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.406 17:59:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.666 ************************************ 00:11:13.666 START TEST nvmf_connect_disconnect 00:11:13.666 ************************************ 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:13.666 * Looking for test storage... 00:11:13.666 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.666 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.667 --rc genhtml_branch_coverage=1 00:11:13.667 --rc genhtml_function_coverage=1 00:11:13.667 --rc genhtml_legend=1 00:11:13.667 --rc geninfo_all_blocks=1 00:11:13.667 --rc geninfo_unexecuted_blocks=1 00:11:13.667 00:11:13.667 ' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.667 --rc genhtml_branch_coverage=1 00:11:13.667 --rc genhtml_function_coverage=1 00:11:13.667 --rc genhtml_legend=1 00:11:13.667 --rc geninfo_all_blocks=1 00:11:13.667 --rc geninfo_unexecuted_blocks=1 00:11:13.667 00:11:13.667 ' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.667 --rc genhtml_branch_coverage=1 00:11:13.667 --rc genhtml_function_coverage=1 00:11:13.667 --rc genhtml_legend=1 00:11:13.667 --rc geninfo_all_blocks=1 00:11:13.667 --rc geninfo_unexecuted_blocks=1 00:11:13.667 00:11:13.667 ' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.667 --rc genhtml_branch_coverage=1 00:11:13.667 --rc genhtml_function_coverage=1 00:11:13.667 --rc genhtml_legend=1 00:11:13.667 --rc geninfo_all_blocks=1 00:11:13.667 --rc geninfo_unexecuted_blocks=1 00:11:13.667 00:11:13.667 ' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.667 17:59:21 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.667 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.667 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.927 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.927 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.927 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.927 17:59:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:11:22.090 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:22.090 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:22.091 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:22.091 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
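The device probing above boils down to sysfs globbing: for each PCI function that matched a known Mellanox or Intel device ID, common.sh asks the kernel which netdev sits on it. A minimal equivalent, using one of the addresses reported in the trace:

    pci=0000:d9:00.0                                  # mlx5 function found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # kernel exposes the netdev name here
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keeping e.g. mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"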
00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:22.091 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:22.091 17:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:22.091 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:22.091 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:22.091 altname enp217s0f0np0 00:11:22.091 altname ens818f0np0 00:11:22.091 inet 192.168.100.8/24 scope global mlx_0_0 00:11:22.091 valid_lft forever preferred_lft forever 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:22.091 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:22.091 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:22.091 altname enp217s0f1np1 00:11:22.091 altname ens818f1np1 00:11:22.091 inet 192.168.100.9/24 scope global mlx_0_1 00:11:22.091 valid_lft forever preferred_lft forever 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:22.091 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:22.092 17:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:22.092 192.168.100.9' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:22.092 192.168.100.9' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:22.092 192.168.100.9' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:22.092 17:59:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2279468 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2279468 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2279468 ']' 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.092 17:59:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 [2024-12-09 17:59:28.934775] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:11:22.092 [2024-12-09 17:59:28.934829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.092 [2024-12-09 17:59:29.024482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.092 [2024-12-09 17:59:29.062534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.092 [2024-12-09 17:59:29.062573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.092 [2024-12-09 17:59:29.062582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.092 [2024-12-09 17:59:29.062590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.092 [2024-12-09 17:59:29.062596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
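The target launch just logged follows a standard pattern: start nvmf_tgt in the background and poll its RPC socket before issuing any configuration. A minimal stand-in for the suite's nvmfappstart/waitforlisten helpers (the real ones add retry caps and liveness checks on the pid), using the same flags and default socket path as the trace:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app answers:
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done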
00:11:22.092 [2024-12-09 17:59:29.064404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.092 [2024-12-09 17:59:29.064445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.092 [2024-12-09 17:59:29.064552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.092 [2024-12-09 17:59:29.064554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 [2024-12-09 17:59:29.816919] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:22.092 [2024-12-09 17:59:29.838909] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1feb980/0x1fefe70) succeed. 00:11:22.092 [2024-12-09 17:59:29.848206] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fed010/0x2031510) succeed. 
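With the reactors up and both IB devices created, the connect_disconnect body traced next is a short RPC sequence plus a loop; condensed here with rpc.py standing in for rpc_cmd, and the host NQN/ID flags visible in the trace elided for brevity:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512              # returns the bdev name, Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    for _ in $(seq 1 5); do                       # num_iterations=5 in the trace
        nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # logs "disconnected 1 controller(s)"
    done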
00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 17:59:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 [2024-12-09 17:59:30.010646] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:22.092 17:59:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:26.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:42.064 17:59:50 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.064 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:42.064 rmmod nvme_rdma 00:11:42.064 rmmod nvme_fabrics 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2279468 ']' 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2279468 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2279468 ']' 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2279468 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279468 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279468' 00:11:42.321 killing process with pid 2279468 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2279468 00:11:42.321 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2279468 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:42.580 00:11:42.580 real 0m28.971s 00:11:42.580 user 1m26.898s 00:11:42.580 sys 0m6.721s 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:42.580 
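The teardown above follows a fixed pattern: unload the host-side fabrics modules, then kill the target application, but only after double-checking that the pid still belongs to an SPDK reactor and not to sudo. A condensed sketch of that killprocess logic (the pid is the one from this run, kept purely for illustration):

```bash
# Sketch of the cleanup traced above. `wait` only succeeds for children of
# the current shell, which is why the autotest runs this in the same shell
# that launched nvmf_tgt.
modprobe -v -r nvme-rdma       # prints "rmmod nvme_rdma"
modprobe -v -r nvme-fabrics    # prints "rmmod nvme_fabrics"

pid=2279468                    # example value from this log
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")    # expect reactor_0
    if [ "$name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi
fi
```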
************************************ 00:11:42.580 END TEST nvmf_connect_disconnect 00:11:42.580 ************************************ 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.580 ************************************ 00:11:42.580 START TEST nvmf_multitarget 00:11:42.580 ************************************ 00:11:42.580 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:42.841 * Looking for test storage... 00:11:42.841 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.841 --rc genhtml_branch_coverage=1 00:11:42.841 --rc genhtml_function_coverage=1 00:11:42.841 --rc genhtml_legend=1 00:11:42.841 --rc geninfo_all_blocks=1 00:11:42.841 --rc geninfo_unexecuted_blocks=1 00:11:42.841 00:11:42.841 ' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.841 --rc genhtml_branch_coverage=1 00:11:42.841 --rc genhtml_function_coverage=1 00:11:42.841 --rc genhtml_legend=1 00:11:42.841 --rc geninfo_all_blocks=1 00:11:42.841 --rc geninfo_unexecuted_blocks=1 00:11:42.841 00:11:42.841 ' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.841 --rc genhtml_branch_coverage=1 00:11:42.841 --rc genhtml_function_coverage=1 00:11:42.841 --rc genhtml_legend=1 00:11:42.841 --rc geninfo_all_blocks=1 00:11:42.841 --rc geninfo_unexecuted_blocks=1 00:11:42.841 00:11:42.841 ' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.841 --rc genhtml_branch_coverage=1 00:11:42.841 --rc genhtml_function_coverage=1 00:11:42.841 --rc genhtml_legend=1 00:11:42.841 --rc geninfo_all_blocks=1 00:11:42.841 --rc geninfo_unexecuted_blocks=1 00:11:42.841 00:11:42.841 ' 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.841 17:59:50 
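The long scripts/common.sh trace above is just a semantic version comparison: `lt 1.15 2` splits both strings into fields and compares them numerically until one side wins, so lcov 1.x selects the older set of `--rc` option spellings. A simplified sketch of that logic (the real cmp_versions also splits on `:` and supports more operators):

```bash
# Minimal version_lt in the spirit of scripts/common.sh's cmp_versions:
# split on dots/dashes, compare field by field, missing fields count as 0.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # same outcome as the trace
```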
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.841 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.842 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:42.842 17:59:50 
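Note the shell error captured just above: `common.sh: line 33: [: : integer expression expected`. It comes from `'[' '' -eq 1 ']'`: the flag being tested is unset, so it expands to an empty string, and `test`'s `-eq` requires integers on both sides. The script shrugs it off (the `[` simply returns nonzero), but the defensive spelling is to default the expansion first:

```bash
# Reproduces the error seen in the log, then the guarded form.
[ "" -eq 1 ]                      # -> "[: : integer expression expected", status 2

flag=""                           # stand-in for the unset flag in common.sh
if [ "${flag:-0}" -eq 1 ]; then   # empty/unset is treated as 0, no error
    echo "flag set"
fi
```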
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.842 17:59:50 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:50.966 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:50.966 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:50.966 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:50.966 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:50.966 17:59:57 
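The device scan above matches both PCI functions 0000:d9:00.0 and 0000:d9:00.1 against the Mellanox ID table (0x15b3:0x1015 is a ConnectX-4 Lx), then resolves each function to its kernel net device through sysfs. That resolution is a one-glob trick worth calling out; a self-contained sketch using the addresses from this log:

```bash
# Sketch: map a PCI function to its net device the way the trace does,
# by globbing the device's net/ directory and stripping the path.
for pci in 0000:d9:00.0 0000:d9:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # -> mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```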
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.966 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.967 17:59:57 
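rdma_device_init, traced above, loads the IB/RDMA core modules in dependency order and then intersects the detected net devices with the list rxe_cfg reports, using `continue 2` to jump to the next outer candidate as soon as one matches. A sketch of both pieces, with the arrays seeded from this run's values:

```bash
# Module load order as traced by load_ib_rdma_modules.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

# Keep only net devices that rxe_cfg also knows about; `continue 2`
# advances the outer loop, exactly as in get_rdma_if_list above.
net_devs=(mlx_0_0 mlx_0_1)        # example values from this log
rxe_net_devs=(mlx_0_0 mlx_0_1)
for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
            echo "$net_dev"
            continue 2
        fi
    done
done
```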
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:50.967 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.967 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:50.967 altname enp217s0f0np0 00:11:50.967 altname ens818f0np0 00:11:50.967 inet 192.168.100.8/24 scope global mlx_0_0 00:11:50.967 valid_lft forever preferred_lft forever 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:50.967 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.967 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:50.967 altname enp217s0f1np1 00:11:50.967 altname ens818f1np1 00:11:50.967 inet 192.168.100.9/24 scope global mlx_0_1 00:11:50.967 valid_lft forever preferred_lft forever 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.967 17:59:57 
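Both interfaces report their test addresses here, and get_ip_address reads them back with a small pipeline: `ip -o -4` prints one line per address, the fourth field is `addr/prefix`, and `cut` drops the prefix length. As a reusable helper:

```bash
# Sketch of get_ip_address as traced: fourth field of the one-line
# `ip -o -4` output, minus the /24 prefix.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed
get_ip_address mlx_0_1   # -> 192.168.100.9
```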
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:50.967 192.168.100.9' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:50.967 192.168.100.9' 00:11:50.967 17:59:57 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:50.967 192.168.100.9' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2286506 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2286506 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2286506 ']' 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.967 17:59:57 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:50.967 [2024-12-09 17:59:58.039561] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
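With the first and second target IPs split out of RDMA_IP_LIST (`head -n 1` and `tail -n +2 | head -n 1` above), nvmfappstart launches `nvmf_tgt -i 0 -e 0xFFFF -m 0xF` and waitforlisten blocks until the RPC socket answers. The real waitforlisten in autotest_common.sh is more careful; this sketch only shows the general shape, assuming the default socket path:

```bash
# Simplified waitforlisten: poll the RPC socket until the app responds,
# bailing out if the process dies first.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 0.1
done
```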
00:11:50.967 [2024-12-09 17:59:58.039611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.967 [2024-12-09 17:59:58.129739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.967 [2024-12-09 17:59:58.170264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.967 [2024-12-09 17:59:58.170304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.967 [2024-12-09 17:59:58.170316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.967 [2024-12-09 17:59:58.170324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.967 [2024-12-09 17:59:58.170330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.967 [2024-12-09 17:59:58.172072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.967 [2024-12-09 17:59:58.172163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.967 [2024-12-09 17:59:58.172199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.967 [2024-12-09 17:59:58.172200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.967 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:50.968 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:50.968 17:59:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:51.225 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:51.225 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:51.225 "nvmf_tgt_1" 00:11:51.225 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:51.482 "nvmf_tgt_2" 00:11:51.482 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:51.482 
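The multitarget test proper is a counting exercise, traced above and just below: one default target exists, two more are created (`nvmf_create_target -n ... -s 32`; judging by the trace, `-s` sizes the new target's subsystem limit), the count must read 3, and after deleting both it must drop back to 1. The whole flow, condensed:

```bash
# Sketch of the create/count/delete cycle around this point in the log.
RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$("$RPC_PY" nvmf_get_targets | jq length)" -eq 1 ]   # default target only

"$RPC_PY" nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
"$RPC_PY" nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
[ "$("$RPC_PY" nvmf_get_targets | jq length)" -eq 3 ]

"$RPC_PY" nvmf_delete_target -n nvmf_tgt_1              # prints "true"
"$RPC_PY" nvmf_delete_target -n nvmf_tgt_2
[ "$("$RPC_PY" nvmf_get_targets | jq length)" -eq 1 ]
```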
17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:51.482 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:51.482 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:51.739 true 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:51.739 true 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.739 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:51.739 rmmod nvme_rdma 00:11:51.739 rmmod nvme_fabrics 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2286506 ']' 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2286506 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2286506 ']' 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2286506 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286506 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286506' 00:11:51.998 killing process with pid 2286506 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2286506 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2286506 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:51.998 00:11:51.998 real 0m9.482s 00:11:51.998 user 0m10.042s 00:11:51.998 sys 0m6.149s 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.998 17:59:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:51.998 ************************************ 00:11:51.998 END TEST nvmf_multitarget 00:11:51.998 ************************************ 00:11:52.257 17:59:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:52.257 17:59:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:52.257 17:59:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.257 17:59:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.257 ************************************ 00:11:52.257 START TEST nvmf_rpc 00:11:52.257 ************************************ 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:52.257 * Looking for test storage... 
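Every test in this log is wrapped the same way: `run_test <name> <script> --transport=rdma` prints the START banner, times the script (the real/user/sys block above, here 0m9.482s for nvmf_multitarget), and prints the END banner before the next test begins. A minimal sketch of such a wrapper, for orientation only; the real helper in autotest_common.sh also manages xtrace state and does argument checks like the `'[' 3 -le 1 ']'` seen here:

```bash
# Minimal run_test-style wrapper: banner, timed run, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test nvmf_rpc ./test/nvmf/target/rpc.sh --transport=rdma
```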
00:11:52.257 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.257 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.258 --rc genhtml_branch_coverage=1 00:11:52.258 --rc genhtml_function_coverage=1 00:11:52.258 --rc genhtml_legend=1 00:11:52.258 --rc geninfo_all_blocks=1 00:11:52.258 --rc geninfo_unexecuted_blocks=1 00:11:52.258 00:11:52.258 ' 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.258 --rc genhtml_branch_coverage=1 00:11:52.258 --rc genhtml_function_coverage=1 00:11:52.258 --rc genhtml_legend=1 00:11:52.258 --rc geninfo_all_blocks=1 00:11:52.258 --rc geninfo_unexecuted_blocks=1 00:11:52.258 00:11:52.258 ' 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.258 --rc genhtml_branch_coverage=1 00:11:52.258 --rc genhtml_function_coverage=1 00:11:52.258 --rc genhtml_legend=1 00:11:52.258 --rc geninfo_all_blocks=1 00:11:52.258 --rc geninfo_unexecuted_blocks=1 00:11:52.258 00:11:52.258 ' 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:52.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.258 --rc genhtml_branch_coverage=1 00:11:52.258 --rc genhtml_function_coverage=1 00:11:52.258 --rc genhtml_legend=1 00:11:52.258 --rc geninfo_all_blocks=1 00:11:52.258 --rc geninfo_unexecuted_blocks=1 00:11:52.258 00:11:52.258 ' 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.258 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:52.517 18:00:00 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.517 18:00:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.640 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.641 18:00:07 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:00.641 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:00.641 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:00.641 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:00.641 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:00.641 18:00:07 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:00.641 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:00.641 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:00.641 altname enp217s0f0np0 00:12:00.641 altname ens818f0np0 00:12:00.641 inet 192.168.100.8/24 scope global mlx_0_0 00:12:00.641 valid_lft forever preferred_lft forever 00:12:00.641 
18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:00.641 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:00.641 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:00.641 altname enp217s0f1np1 00:12:00.641 altname ens818f1np1 00:12:00.641 inet 192.168.100.9/24 scope global mlx_0_1 00:12:00.641 valid_lft forever preferred_lft forever 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.641 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
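[Note] The repeated get_ip_address calls traced above reduce to one small pipeline: the one-line (-o) IPv4 view of an interface, the addr/prefix column, and the prefix stripped off. A minimal standalone bash sketch of that idiom (the helper name and sample results mirror the trace, not the literal nvmf/common.sh source):

    # Print the primary IPv4 address of an interface, without the /prefix.
    get_ip_address() {
        local interface=$1
        # 'ip -o -4 addr show' emits one line per address; field 4 is "addr/prefix".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this testbed
    get_ip_address mlx_0_1   # -> 192.168.100.9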
00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:00.642 192.168.100.9' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:00.642 192.168.100.9' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:00.642 192.168.100.9' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
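[Note] Everything the harness needs from the two discovered ports is then derived with head/tail splits, exactly as the commands above show. Rendered as a standalone bash sketch (variable names as in the trace):

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma   # host-side NVMe/RDMA fabrics driver, loaded last above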
00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2290784 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2290784 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2290784 ']' 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.642 18:00:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.642 [2024-12-09 18:00:07.593178] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:12:00.642 [2024-12-09 18:00:07.593226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.642 [2024-12-09 18:00:07.682952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.642 [2024-12-09 18:00:07.723816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.642 [2024-12-09 18:00:07.723855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.642 [2024-12-09 18:00:07.723864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.642 [2024-12-09 18:00:07.723873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.642 [2024-12-09 18:00:07.723879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
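[Note] The EAL banner above comes from a single launch command plus a wait on the RPC socket. A rough bash sketch of that nvmfappstart/waitforlisten step; the until-loop is a simplification of the real helper, which caps retries (max_retries=100 as logged) rather than waiting forever:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # shm id 0 (-i), all trace groups (-e 0xFFFF), cores 0-3 (-m 0xF)
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target answers on its UNIX-domain RPC socket.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done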
00:12:00.642 [2024-12-09 18:00:07.726967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.642 [2024-12-09 18:00:07.726997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.642 [2024-12-09 18:00:07.727109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.642 [2024-12-09 18:00:07.727110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:00.642 "tick_rate": 2500000000, 00:12:00.642 "poll_groups": [ 00:12:00.642 { 00:12:00.642 "name": "nvmf_tgt_poll_group_000", 00:12:00.642 "admin_qpairs": 0, 00:12:00.642 "io_qpairs": 0, 00:12:00.642 "current_admin_qpairs": 0, 00:12:00.642 "current_io_qpairs": 0, 00:12:00.642 "pending_bdev_io": 0, 00:12:00.642 "completed_nvme_io": 0, 00:12:00.642 "transports": [] 00:12:00.642 }, 00:12:00.642 { 00:12:00.642 "name": "nvmf_tgt_poll_group_001", 00:12:00.642 "admin_qpairs": 0, 00:12:00.642 "io_qpairs": 0, 00:12:00.642 "current_admin_qpairs": 0, 00:12:00.642 "current_io_qpairs": 0, 00:12:00.642 "pending_bdev_io": 0, 00:12:00.642 "completed_nvme_io": 0, 00:12:00.642 "transports": [] 00:12:00.642 }, 00:12:00.642 { 00:12:00.642 "name": "nvmf_tgt_poll_group_002", 00:12:00.642 "admin_qpairs": 0, 00:12:00.642 "io_qpairs": 0, 00:12:00.642 "current_admin_qpairs": 0, 00:12:00.642 "current_io_qpairs": 0, 00:12:00.642 "pending_bdev_io": 0, 00:12:00.642 "completed_nvme_io": 0, 00:12:00.642 "transports": [] 00:12:00.642 }, 00:12:00.642 { 00:12:00.642 "name": "nvmf_tgt_poll_group_003", 00:12:00.642 "admin_qpairs": 0, 00:12:00.642 "io_qpairs": 0, 00:12:00.642 "current_admin_qpairs": 0, 00:12:00.642 "current_io_qpairs": 0, 00:12:00.642 "pending_bdev_io": 0, 00:12:00.642 "completed_nvme_io": 0, 00:12:00.642 "transports": [] 00:12:00.642 } 00:12:00.642 ] 00:12:00.642 }' 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.642 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.900 [2024-12-09 18:00:08.625992] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc189e0/0xc1ced0) succeed. 00:12:00.900 [2024-12-09 18:00:08.635215] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc1a070/0xc5e570) succeed. 00:12:00.900 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:00.901 "tick_rate": 2500000000, 00:12:00.901 "poll_groups": [ 00:12:00.901 { 00:12:00.901 "name": "nvmf_tgt_poll_group_000", 00:12:00.901 "admin_qpairs": 0, 00:12:00.901 "io_qpairs": 0, 00:12:00.901 "current_admin_qpairs": 0, 00:12:00.901 "current_io_qpairs": 0, 00:12:00.901 "pending_bdev_io": 0, 00:12:00.901 "completed_nvme_io": 0, 00:12:00.901 "transports": [ 00:12:00.901 { 00:12:00.901 "trtype": "RDMA", 00:12:00.901 "pending_data_buffer": 0, 00:12:00.901 "devices": [ 00:12:00.901 { 00:12:00.901 "name": "mlx5_0", 00:12:00.901 "polls": 15687, 00:12:00.901 "idle_polls": 15687, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "mlx5_1", 00:12:00.901 "polls": 15687, 00:12:00.901 "idle_polls": 15687, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "nvmf_tgt_poll_group_001", 00:12:00.901 "admin_qpairs": 0, 00:12:00.901 "io_qpairs": 0, 00:12:00.901 "current_admin_qpairs": 0, 00:12:00.901 "current_io_qpairs": 0, 00:12:00.901 "pending_bdev_io": 0, 00:12:00.901 "completed_nvme_io": 0, 00:12:00.901 "transports": [ 00:12:00.901 { 00:12:00.901 "trtype": "RDMA", 00:12:00.901 "pending_data_buffer": 0, 00:12:00.901 "devices": [ 00:12:00.901 { 00:12:00.901 "name": "mlx5_0", 
00:12:00.901 "polls": 9779, 00:12:00.901 "idle_polls": 9779, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "mlx5_1", 00:12:00.901 "polls": 9779, 00:12:00.901 "idle_polls": 9779, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "nvmf_tgt_poll_group_002", 00:12:00.901 "admin_qpairs": 0, 00:12:00.901 "io_qpairs": 0, 00:12:00.901 "current_admin_qpairs": 0, 00:12:00.901 "current_io_qpairs": 0, 00:12:00.901 "pending_bdev_io": 0, 00:12:00.901 "completed_nvme_io": 0, 00:12:00.901 "transports": [ 00:12:00.901 { 00:12:00.901 "trtype": "RDMA", 00:12:00.901 "pending_data_buffer": 0, 00:12:00.901 "devices": [ 00:12:00.901 { 00:12:00.901 "name": "mlx5_0", 00:12:00.901 "polls": 5516, 00:12:00.901 "idle_polls": 5516, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "mlx5_1", 00:12:00.901 "polls": 5516, 00:12:00.901 "idle_polls": 5516, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "nvmf_tgt_poll_group_003", 00:12:00.901 "admin_qpairs": 0, 00:12:00.901 "io_qpairs": 0, 00:12:00.901 "current_admin_qpairs": 0, 00:12:00.901 "current_io_qpairs": 0, 00:12:00.901 "pending_bdev_io": 0, 00:12:00.901 "completed_nvme_io": 0, 00:12:00.901 "transports": [ 00:12:00.901 { 00:12:00.901 "trtype": "RDMA", 00:12:00.901 "pending_data_buffer": 0, 00:12:00.901 "devices": [ 00:12:00.901 { 00:12:00.901 "name": "mlx5_0", 00:12:00.901 "polls": 900, 00:12:00.901 "idle_polls": 900, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 }, 00:12:00.901 { 00:12:00.901 "name": "mlx5_1", 
00:12:00.901 "polls": 900, 00:12:00.901 "idle_polls": 900, 00:12:00.901 "completions": 0, 00:12:00.901 "requests": 0, 00:12:00.901 "request_latency": 0, 00:12:00.901 "pending_free_request": 0, 00:12:00.901 "pending_rdma_read": 0, 00:12:00.901 "pending_rdma_write": 0, 00:12:00.901 "pending_rdma_send": 0, 00:12:00.901 "total_send_wrs": 0, 00:12:00.901 "send_doorbell_updates": 0, 00:12:00.901 "total_recv_wrs": 4096, 00:12:00.901 "recv_doorbell_updates": 1 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 } 00:12:00.901 ] 00:12:00.901 }' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:00.901 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:01.160 18:00:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:01.160 18:00:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.160 Malloc1 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.160 [2024-12-09 18:00:09.079264] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:01.160 18:00:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:01.160 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:12:01.160 [2024-12-09 18:00:09.125480] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:12:01.418 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:01.418 could not add new controller: failed to write to nvme-fabrics device 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.418 18:00:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:02.349 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.349 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:02.349 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.349 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:02.349 18:00:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:04.245 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:04.245 18:00:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:04.245 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.245 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:04.245 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.245 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:04.245 18:00:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.176 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.176 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:05.176 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:05.176 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:05.464 [2024-12-09 18:00:13.237263] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:12:05.464 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:05.464 could not add new controller: failed to write to nvme-fabrics device 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.464 18:00:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:06.432 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.432 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.432 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.432 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:06.432 18:00:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:08.326 18:00:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.697 [2024-12-09 18:00:17.335332] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.697 18:00:17 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.697 18:00:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:10.628 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.628 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.628 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.628 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.628 18:00:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:12.524 18:00:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.455 [2024-12-09 18:00:21.392779] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.455 18:00:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:14.827 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.827 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.827 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.827 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.827 18:00:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:16.723 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:16.723 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:16.723 
18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.723 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:16.723 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.723 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:16.723 18:00:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.658 [2024-12-09 18:00:25.432962] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.658 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.659 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.659 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.659 18:00:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.589 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.589 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.589 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.589 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.589 18:00:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:20.484 18:00:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.855 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.855 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:21.855 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:21.855 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.855 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:21.856 18:00:29 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.856 [2024-12-09 18:00:29.472086] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.856 18:00:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:22.787 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.787 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.787 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.788 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.788 18:00:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:24.682 18:00:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:25.614 18:00:33 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 [2024-12-09 18:00:33.503960] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.614 18:00:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:26.546 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.546 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:26.546 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.546 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:26.546 18:00:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:29.070 18:00:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 [2024-12-09 18:00:37.549623] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 [2024-12-09 18:00:37.597790] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.635 18:00:37 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.635 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 [2024-12-09 18:00:37.645970] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.893 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 [2024-12-09 18:00:37.694298] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
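[Editor's note] The xtrace output above repeats one pattern five times from target/rpc.sh's first loop (@81-@94): build a subsystem over RPC, connect a host with nvme-cli, poll until the namespace's serial appears in lsblk, then tear everything down. Below is a minimal stand-alone sketch of that cycle, not the literal target/rpc.sh source; it assumes a running SPDK nvmf target with the RDMA transport created, a Malloc1 bdev already present, and SPDK's scripts/rpc.py reachable on PATH. NQN, serial, address, port, and all RPC method names are taken verbatim from the trace; the host NQN/ID flags shown in the trace are omitted for brevity.

  #!/usr/bin/env bash
  # Hedged sketch of the per-iteration cycle traced above.
  set -e
  NQN=nqn.2016-06.io.spdk:cnode1
  SERIAL=SPDKISFASTANDAWESOME
  ADDR=192.168.100.8
  PORT=4420

  for i in $(seq 1 5); do
      rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
      rpc.py nvmf_subsystem_add_listener "$NQN" -t rdma -a "$ADDR" -s "$PORT"
      rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5    # nsid 5, as in the trace
      rpc.py nvmf_subsystem_allow_any_host "$NQN"

      nvme connect -i 15 -t rdma -n "$NQN" -a "$ADDR" -s "$PORT"
      # waitforserial: block until a device with the expected serial shows up
      until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 2; done

      nvme disconnect -n "$NQN"
      rpc.py nvmf_subsystem_remove_ns "$NQN" 5
      rpc.py nvmf_delete_subsystem "$NQN"
  done

The second loop (@99-@107, whose iterations continue below) runs the same subsystem lifecycle without a host connect: it adds the namespace with an auto-assigned nsid and removes nsid 1 each pass, exercising the pure RPC add/remove paths.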
00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 [2024-12-09 18:00:37.742268] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.894 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:29.894 "tick_rate": 2500000000, 00:12:29.894 "poll_groups": [ 00:12:29.894 { 00:12:29.894 "name": "nvmf_tgt_poll_group_000", 00:12:29.894 "admin_qpairs": 2, 00:12:29.894 "io_qpairs": 27, 00:12:29.894 "current_admin_qpairs": 0, 00:12:29.894 "current_io_qpairs": 0, 00:12:29.894 "pending_bdev_io": 0, 00:12:29.894 "completed_nvme_io": 128, 00:12:29.894 "transports": [ 00:12:29.894 { 00:12:29.894 "trtype": "RDMA", 00:12:29.894 "pending_data_buffer": 0, 00:12:29.894 "devices": [ 00:12:29.894 { 00:12:29.894 "name": "mlx5_0", 00:12:29.894 "polls": 3516963, 00:12:29.894 "idle_polls": 3516633, 00:12:29.894 "completions": 367, 00:12:29.894 "requests": 183, 00:12:29.894 "request_latency": 37175074, 00:12:29.894 "pending_free_request": 0, 00:12:29.894 "pending_rdma_read": 0, 00:12:29.894 "pending_rdma_write": 0, 00:12:29.894 "pending_rdma_send": 0, 00:12:29.894 "total_send_wrs": 310, 00:12:29.894 "send_doorbell_updates": 161, 00:12:29.894 "total_recv_wrs": 4279, 00:12:29.894 "recv_doorbell_updates": 161 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "name": "mlx5_1", 00:12:29.894 "polls": 3516963, 00:12:29.894 "idle_polls": 3516963, 00:12:29.894 "completions": 0, 00:12:29.894 "requests": 0, 00:12:29.894 "request_latency": 0, 00:12:29.894 "pending_free_request": 0, 00:12:29.894 "pending_rdma_read": 0, 00:12:29.894 "pending_rdma_write": 0, 00:12:29.894 "pending_rdma_send": 0, 00:12:29.894 "total_send_wrs": 0, 00:12:29.894 "send_doorbell_updates": 0, 00:12:29.894 "total_recv_wrs": 4096, 00:12:29.894 "recv_doorbell_updates": 1 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "name": "nvmf_tgt_poll_group_001", 00:12:29.894 "admin_qpairs": 2, 00:12:29.894 "io_qpairs": 26, 00:12:29.894 "current_admin_qpairs": 0, 00:12:29.894 "current_io_qpairs": 0, 00:12:29.894 "pending_bdev_io": 0, 00:12:29.894 "completed_nvme_io": 125, 00:12:29.894 "transports": [ 00:12:29.894 { 00:12:29.894 "trtype": "RDMA", 00:12:29.894 "pending_data_buffer": 0, 00:12:29.894 "devices": [ 00:12:29.894 { 00:12:29.894 "name": "mlx5_0", 00:12:29.894 "polls": 3515028, 00:12:29.894 "idle_polls": 3514711, 00:12:29.894 "completions": 356, 00:12:29.894 "requests": 178, 00:12:29.894 "request_latency": 36100266, 00:12:29.894 "pending_free_request": 0, 00:12:29.894 "pending_rdma_read": 0, 00:12:29.894 "pending_rdma_write": 0, 00:12:29.894 "pending_rdma_send": 0, 00:12:29.894 "total_send_wrs": 302, 00:12:29.894 "send_doorbell_updates": 157, 00:12:29.894 "total_recv_wrs": 4274, 00:12:29.894 "recv_doorbell_updates": 158 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "name": "mlx5_1", 00:12:29.894 "polls": 3515028, 00:12:29.894 "idle_polls": 3515028, 00:12:29.894 "completions": 0, 00:12:29.894 "requests": 0, 00:12:29.894 "request_latency": 0, 00:12:29.894 "pending_free_request": 0, 00:12:29.894 
"pending_rdma_read": 0, 00:12:29.894 "pending_rdma_write": 0, 00:12:29.894 "pending_rdma_send": 0, 00:12:29.894 "total_send_wrs": 0, 00:12:29.894 "send_doorbell_updates": 0, 00:12:29.894 "total_recv_wrs": 4096, 00:12:29.894 "recv_doorbell_updates": 1 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 } 00:12:29.894 ] 00:12:29.894 }, 00:12:29.894 { 00:12:29.894 "name": "nvmf_tgt_poll_group_002", 00:12:29.894 "admin_qpairs": 1, 00:12:29.894 "io_qpairs": 26, 00:12:29.894 "current_admin_qpairs": 0, 00:12:29.894 "current_io_qpairs": 0, 00:12:29.894 "pending_bdev_io": 0, 00:12:29.894 "completed_nvme_io": 126, 00:12:29.894 "transports": [ 00:12:29.894 { 00:12:29.894 "trtype": "RDMA", 00:12:29.894 "pending_data_buffer": 0, 00:12:29.894 "devices": [ 00:12:29.894 { 00:12:29.894 "name": "mlx5_0", 00:12:29.894 "polls": 3566882, 00:12:29.894 "idle_polls": 3566612, 00:12:29.894 "completions": 309, 00:12:29.894 "requests": 154, 00:12:29.894 "request_latency": 34442514, 00:12:29.894 "pending_free_request": 0, 00:12:29.894 "pending_rdma_read": 0, 00:12:29.894 "pending_rdma_write": 0, 00:12:29.895 "pending_rdma_send": 0, 00:12:29.895 "total_send_wrs": 268, 00:12:29.895 "send_doorbell_updates": 131, 00:12:29.895 "total_recv_wrs": 4250, 00:12:29.895 "recv_doorbell_updates": 131 00:12:29.895 }, 00:12:29.895 { 00:12:29.895 "name": "mlx5_1", 00:12:29.895 "polls": 3566882, 00:12:29.895 "idle_polls": 3566882, 00:12:29.895 "completions": 0, 00:12:29.895 "requests": 0, 00:12:29.895 "request_latency": 0, 00:12:29.895 "pending_free_request": 0, 00:12:29.895 "pending_rdma_read": 0, 00:12:29.895 "pending_rdma_write": 0, 00:12:29.895 "pending_rdma_send": 0, 00:12:29.895 "total_send_wrs": 0, 00:12:29.895 "send_doorbell_updates": 0, 00:12:29.895 "total_recv_wrs": 4096, 00:12:29.895 "recv_doorbell_updates": 1 00:12:29.895 } 00:12:29.895 ] 00:12:29.895 } 00:12:29.895 ] 00:12:29.895 }, 00:12:29.895 { 00:12:29.895 "name": "nvmf_tgt_poll_group_003", 00:12:29.895 "admin_qpairs": 2, 00:12:29.895 "io_qpairs": 26, 00:12:29.895 "current_admin_qpairs": 0, 00:12:29.895 "current_io_qpairs": 0, 00:12:29.895 "pending_bdev_io": 0, 00:12:29.895 "completed_nvme_io": 76, 00:12:29.895 "transports": [ 00:12:29.895 { 00:12:29.895 "trtype": "RDMA", 00:12:29.895 "pending_data_buffer": 0, 00:12:29.895 "devices": [ 00:12:29.895 { 00:12:29.895 "name": "mlx5_0", 00:12:29.895 "polls": 2821196, 00:12:29.895 "idle_polls": 2820960, 00:12:29.895 "completions": 256, 00:12:29.895 "requests": 128, 00:12:29.895 "request_latency": 23211218, 00:12:29.895 "pending_free_request": 0, 00:12:29.895 "pending_rdma_read": 0, 00:12:29.895 "pending_rdma_write": 0, 00:12:29.895 "pending_rdma_send": 0, 00:12:29.895 "total_send_wrs": 202, 00:12:29.895 "send_doorbell_updates": 116, 00:12:29.895 "total_recv_wrs": 4224, 00:12:29.895 "recv_doorbell_updates": 117 00:12:29.895 }, 00:12:29.895 { 00:12:29.895 "name": "mlx5_1", 00:12:29.895 "polls": 2821196, 00:12:29.895 "idle_polls": 2821196, 00:12:29.895 "completions": 0, 00:12:29.895 "requests": 0, 00:12:29.895 "request_latency": 0, 00:12:29.895 "pending_free_request": 0, 00:12:29.895 "pending_rdma_read": 0, 00:12:29.895 "pending_rdma_write": 0, 00:12:29.895 "pending_rdma_send": 0, 00:12:29.895 "total_send_wrs": 0, 00:12:29.895 "send_doorbell_updates": 0, 00:12:29.895 "total_recv_wrs": 4096, 00:12:29.895 "recv_doorbell_updates": 1 00:12:29.895 } 00:12:29.895 ] 00:12:29.895 } 00:12:29.895 ] 00:12:29.895 } 00:12:29.895 ] 00:12:29.895 }' 00:12:29.895 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:12:29.895 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:29.895 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:29.895 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:30.152 18:00:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 130929072 > 0 )) 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:30.152 rmmod nvme_rdma 00:12:30.152 rmmod nvme_fabrics 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.152 
18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2290784 ']' 00:12:30.152 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2290784 00:12:30.153 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2290784 ']' 00:12:30.153 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2290784 00:12:30.153 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:30.153 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.153 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290784 00:12:30.411 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.411 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.411 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290784' 00:12:30.411 killing process with pid 2290784 00:12:30.411 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2290784 00:12:30.411 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2290784 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:30.669 00:12:30.669 real 0m38.373s 00:12:30.669 user 2m4.736s 00:12:30.669 sys 0m7.327s 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.669 ************************************ 00:12:30.669 END TEST nvmf_rpc 00:12:30.669 ************************************ 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.669 ************************************ 00:12:30.669 START TEST nvmf_invalid 00:12:30.669 ************************************ 00:12:30.669 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:30.670 * Looking for test storage... 
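[Editor's note] Before the log moves fully into nvmf_invalid: the four assertions traced just above, (( 7 > 0 )), (( 105 > 0 )), (( 1288 > 0 )), and (( 130929072 > 0 )), come from a small jsum helper whose pieces are visible at the target/rpc.sh@19-20 trace lines: a jq filter applied to the captured nvmf_get_stats JSON, with the resulting numbers summed by awk. Below is a re-creation consistent with those trace lines; the exact plumbing of $stats into jq is inferred from the trace, not quoted from rpc.sh, and rpc.py on PATH against the running target is assumed.

  #!/usr/bin/env bash
  stats=$(rpc.py nvmf_get_stats)   # mirrors the rpc_cmd capture at @110

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  # Cross-checking against the poll-group stats printed earlier in the log:
  jsum '.poll_groups[].admin_qpairs'                            # 2+2+1+2     -> 7
  jsum '.poll_groups[].io_qpairs'                               # 27+26+26+26 -> 105
  jsum '.poll_groups[].transports[].devices[].completions'      # -> 1288
  jsum '.poll_groups[].transports[].devices[].request_latency'  # -> 130929072

Each sum only has to be positive for the test to pass: the script asserts (( sum > 0 )), i.e. that at least one admin qpair, one I/O qpair, some RDMA completions, and non-zero cumulative request latency were observed over the run.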
00:12:30.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:30.670 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.670 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.670 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.929 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.929 --rc genhtml_branch_coverage=1 00:12:30.930 --rc genhtml_function_coverage=1 00:12:30.930 --rc genhtml_legend=1 00:12:30.930 --rc geninfo_all_blocks=1 00:12:30.930 --rc geninfo_unexecuted_blocks=1 00:12:30.930 00:12:30.930 ' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.930 --rc genhtml_branch_coverage=1 00:12:30.930 --rc genhtml_function_coverage=1 00:12:30.930 --rc genhtml_legend=1 00:12:30.930 --rc geninfo_all_blocks=1 00:12:30.930 --rc geninfo_unexecuted_blocks=1 00:12:30.930 00:12:30.930 ' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.930 --rc genhtml_branch_coverage=1 00:12:30.930 --rc genhtml_function_coverage=1 00:12:30.930 --rc genhtml_legend=1 00:12:30.930 --rc geninfo_all_blocks=1 00:12:30.930 --rc geninfo_unexecuted_blocks=1 00:12:30.930 00:12:30.930 ' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.930 --rc genhtml_branch_coverage=1 00:12:30.930 --rc genhtml_function_coverage=1 00:12:30.930 --rc genhtml_legend=1 00:12:30.930 --rc geninfo_all_blocks=1 00:12:30.930 --rc geninfo_unexecuted_blocks=1 00:12:30.930 00:12:30.930 ' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:30.930 
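[Editor's note] The lt / cmp_versions trace just above splits each version string on the IFS=.-: separators and compares components numerically until one side wins; here the lcov 1.15 < 2 check gates which LCOV_OPTS coverage flags get exported. A condensed sketch of the same comparison (version_lt is a hypothetical helper name; it assumes purely numeric components, which the script's decimal check enforces):

version_lt() {
    local IFS='.-:'             # same separators cmp_versions splits on
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower component decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher component decides
    done
    return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov is older than 2: use the branch/function coverage opts'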
18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.930 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.930 18:00:38 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.055 18:00:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:39.055 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:39.055 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:39.055 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:39.055 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:12:39.055 18:00:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:39.055 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:39.056 18:00:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:39.056 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:39.056 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:39.056 altname enp217s0f0np0 00:12:39.056 altname ens818f0np0 00:12:39.056 inet 192.168.100.8/24 scope global mlx_0_0 00:12:39.056 valid_lft forever preferred_lft forever 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:39.056 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:39.056 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:39.056 altname enp217s0f1np1 00:12:39.056 altname ens818f1np1 00:12:39.056 inet 192.168.100.9/24 scope global mlx_0_1 00:12:39.056 valid_lft forever preferred_lft forever 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:39.056 18:00:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:39.056 192.168.100.9' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:39.056 192.168.100.9' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:39.056 18:00:45 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:39.056 192.168.100.9' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:39.056 18:00:45 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2299471 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2299471 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2299471 ']' 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.056 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.056 [2024-12-09 18:00:46.072687] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:12:39.056 [2024-12-09 18:00:46.072743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.056 [2024-12-09 18:00:46.162983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.056 [2024-12-09 18:00:46.205006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.056 [2024-12-09 18:00:46.205045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:39.057 [2024-12-09 18:00:46.205054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.057 [2024-12-09 18:00:46.205062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.057 [2024-12-09 18:00:46.205069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.057 [2024-12-09 18:00:46.206868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.057 [2024-12-09 18:00:46.207023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.057 [2024-12-09 18:00:46.207063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.057 [2024-12-09 18:00:46.207064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.057 18:00:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3058 00:12:39.313 [2024-12-09 18:00:47.128158] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:39.313 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:39.313 { 00:12:39.313 "nqn": "nqn.2016-06.io.spdk:cnode3058", 00:12:39.313 "tgt_name": "foobar", 00:12:39.313 "method": "nvmf_create_subsystem", 00:12:39.313 "req_id": 1 00:12:39.313 } 00:12:39.313 Got JSON-RPC error response 00:12:39.313 response: 00:12:39.313 { 00:12:39.313 "code": -32603, 00:12:39.313 "message": "Unable to find target foobar" 00:12:39.313 }' 00:12:39.313 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:39.313 { 00:12:39.313 "nqn": "nqn.2016-06.io.spdk:cnode3058", 00:12:39.313 "tgt_name": "foobar", 00:12:39.313 "method": "nvmf_create_subsystem", 00:12:39.313 "req_id": 1 00:12:39.313 } 00:12:39.313 Got JSON-RPC error response 00:12:39.313 response: 00:12:39.313 { 00:12:39.313 "code": -32603, 00:12:39.313 "message": "Unable to find target foobar" 00:12:39.313 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:39.313 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:39.313 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21288 00:12:39.570 [2024-12-09 18:00:47.328891] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21288: 
invalid serial number 'SPDKISFASTANDAWESOME' 00:12:39.570 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:39.570 { 00:12:39.570 "nqn": "nqn.2016-06.io.spdk:cnode21288", 00:12:39.570 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.570 "method": "nvmf_create_subsystem", 00:12:39.570 "req_id": 1 00:12:39.570 } 00:12:39.570 Got JSON-RPC error response 00:12:39.570 response: 00:12:39.570 { 00:12:39.570 "code": -32602, 00:12:39.570 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.570 }' 00:12:39.570 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:39.570 { 00:12:39.570 "nqn": "nqn.2016-06.io.spdk:cnode21288", 00:12:39.570 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.570 "method": "nvmf_create_subsystem", 00:12:39.570 "req_id": 1 00:12:39.570 } 00:12:39.570 Got JSON-RPC error response 00:12:39.570 response: 00:12:39.570 { 00:12:39.570 "code": -32602, 00:12:39.570 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.570 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:39.570 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:39.570 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25981 00:12:39.570 [2024-12-09 18:00:47.533511] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25981: invalid model number 'SPDK_Controller' 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:39.828 { 00:12:39.828 "nqn": "nqn.2016-06.io.spdk:cnode25981", 00:12:39.828 "model_number": "SPDK_Controller\u001f", 00:12:39.828 "method": "nvmf_create_subsystem", 00:12:39.828 "req_id": 1 00:12:39.828 } 00:12:39.828 Got JSON-RPC error response 00:12:39.828 response: 00:12:39.828 { 00:12:39.828 "code": -32602, 00:12:39.828 "message": "Invalid MN SPDK_Controller\u001f" 00:12:39.828 }' 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:39.828 { 00:12:39.828 "nqn": "nqn.2016-06.io.spdk:cnode25981", 00:12:39.828 "model_number": "SPDK_Controller\u001f", 00:12:39.828 "method": "nvmf_create_subsystem", 00:12:39.828 "req_id": 1 00:12:39.828 } 00:12:39.828 Got JSON-RPC error response 00:12:39.828 response: 00:12:39.828 { 00:12:39.828 "code": -32602, 00:12:39.828 "message": "Invalid MN SPDK_Controller\u001f" 00:12:39.828 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # 
local chars 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.828 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
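[Editor's note] The gen_random_s iterations traced above and below build a string one character at a time: pick a code from the chars table (ASCII 32-127), render it with printf %x plus echo -e '\xHH', and append it to string. A compact sketch of that generator (gen_random_string is a hypothetical name; the real loop indexes the chars array directly, with RANDOM=0 set earlier in invalid.sh for a reproducible sequence):

gen_random_string() {
    local length=$1 string='' code ll
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 32 + RANDOM % 96 ))                      # ASCII 32..127, the chars range
        string+=$(echo -e "\\x$(printf '%x' "$code")")    # same printf-%x / echo -e pairing as the trace
    done
    printf '%s\n' "$string"
}

RANDOM=0               # reproducible draws, as invalid.sh sets before the tests
gen_random_string 21   # e.g. a 21-character invalid-serial candidate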
00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:39.829 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.830 18:00:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}aUcL?N'\''V+wOJVHLF+LrU' 00:12:39.830 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '}aUcL?N'\''V+wOJVHLF+LrU' nqn.2016-06.io.spdk:cnode20758 00:12:40.130 [2024-12-09 18:00:47.914760] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20758: invalid serial number '}aUcL?N'V+wOJVHLF+LrU' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:40.130 { 00:12:40.130 "nqn": "nqn.2016-06.io.spdk:cnode20758", 00:12:40.130 "serial_number": "}aUcL?N'\''V+wOJVHLF+LrU", 00:12:40.130 "method": "nvmf_create_subsystem", 00:12:40.130 "req_id": 1 00:12:40.130 } 00:12:40.130 Got JSON-RPC error response 00:12:40.130 response: 00:12:40.130 { 00:12:40.130 "code": -32602, 00:12:40.130 "message": "Invalid SN }aUcL?N'\''V+wOJVHLF+LrU" 00:12:40.130 }' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:40.130 { 00:12:40.130 "nqn": "nqn.2016-06.io.spdk:cnode20758", 00:12:40.130 "serial_number": "}aUcL?N'V+wOJVHLF+LrU", 00:12:40.130 "method": "nvmf_create_subsystem", 00:12:40.130 "req_id": 1 00:12:40.130 } 00:12:40.130 Got JSON-RPC error response 00:12:40.130 response: 00:12:40.130 { 00:12:40.130 "code": -32602, 00:12:40.130 "message": "Invalid SN }aUcL?N'V+wOJVHLF+LrU" 00:12:40.130 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:40.130 
18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.130 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
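The records above and below are target/invalid.sh assembling a random device string one byte at a time: pick a code point, print it as hex with printf %x, turn it back into a character with echo -e '\xNN', and append it to $string. A minimal standalone sketch of the same technique (the function name and the printable-ASCII range here are illustrative; the script draws from its own character set):

    # Build a random printable string of the requested length, byte by byte,
    # the same way invalid.sh assembles its bogus serial/model numbers.
    gen_random_string() {
        local length=$1 string='' ll code
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 95 + 32 ))              # printable ASCII: 32..126
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        printf '%s\n' "$string"
    }

    gen_random_string 41   # one byte past the 40-byte NVMe model-number field

The 41-byte result is deliberately one past the limit, which is what makes the nvmf_create_subsystem call below fail.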
00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:40.131 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x50' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.406 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:12:40.407 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#' 00:12:40.407 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#' nqn.2016-06.io.spdk:cnode26344 00:12:40.665 [2024-12-09 18:00:48.452567] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26344: invalid model number 'n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#' 00:12:40.665 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:40.665 { 00:12:40.665 "nqn": "nqn.2016-06.io.spdk:cnode26344", 00:12:40.665 "model_number": "n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#", 00:12:40.665 "method": "nvmf_create_subsystem", 00:12:40.665 "req_id": 1 00:12:40.665 } 00:12:40.665 Got JSON-RPC error response 00:12:40.665 response: 00:12:40.665 { 00:12:40.665 "code": -32602, 00:12:40.665 "message": "Invalid MN n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#" 00:12:40.665 }' 00:12:40.665 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
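The check that follows is the assertion pattern used by every negative test in this file: capture the JSON-RPC error body that rpc.py prints on stderr, then glob-match it against the expected message. Condensed into a few lines (paths as in this workspace; $model_number stands for the 41-byte string echoed above):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # The call is expected to fail; keep its stderr so the message can be checked.
    out=$("$rpc" nvmf_create_subsystem -d "$model_number" \
            nqn.2016-06.io.spdk:cnode26344 2>&1) || true
    [[ $out == *"Invalid MN"* ]] || { echo "unexpected error: $out" >&2; exit 1; }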
target/invalid.sh@59 -- # [[ request: 00:12:40.665 { 00:12:40.665 "nqn": "nqn.2016-06.io.spdk:cnode26344", 00:12:40.665 "model_number": "n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#", 00:12:40.665 "method": "nvmf_create_subsystem", 00:12:40.665 "req_id": 1 00:12:40.665 } 00:12:40.665 Got JSON-RPC error response 00:12:40.665 response: 00:12:40.665 { 00:12:40.665 "code": -32602, 00:12:40.665 "message": "Invalid MN n)3/fF[/i*.t%khB!P#lml6P#RX{5>t$L(uT-C|%#" 00:12:40.665 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:40.665 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:12:40.923 [2024-12-09 18:00:48.679434] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9042a0/0x908790) succeed. 00:12:40.923 [2024-12-09 18:00:48.688524] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x905930/0x949e30) succeed. 00:12:40.923 18:00:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:41.181 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:12:41.181 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:41.181 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:12:41.181 192.168.100.9' 00:12:41.181 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:12:41.181 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:12:41.439 [2024-12-09 18:00:49.228770] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:41.439 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:41.439 { 00:12:41.439 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:41.439 "listen_address": { 00:12:41.439 "trtype": "rdma", 00:12:41.439 "traddr": "192.168.100.8", 00:12:41.439 "trsvcid": "4421" 00:12:41.439 }, 00:12:41.439 "method": "nvmf_subsystem_remove_listener", 00:12:41.439 "req_id": 1 00:12:41.439 } 00:12:41.439 Got JSON-RPC error response 00:12:41.439 response: 00:12:41.439 { 00:12:41.439 "code": -32602, 00:12:41.439 "message": "Invalid parameters" 00:12:41.439 }' 00:12:41.439 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:41.439 { 00:12:41.439 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:41.439 "listen_address": { 00:12:41.439 "trtype": "rdma", 00:12:41.439 "traddr": "192.168.100.8", 00:12:41.439 "trsvcid": "4421" 00:12:41.439 }, 00:12:41.439 "method": "nvmf_subsystem_remove_listener", 00:12:41.439 "req_id": 1 00:12:41.439 } 00:12:41.439 Got JSON-RPC error response 00:12:41.439 response: 00:12:41.439 { 00:12:41.439 "code": -32602, 00:12:41.439 "message": "Invalid parameters" 00:12:41.439 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:41.439 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2788 -i 0 00:12:41.697 [2024-12-09 18:00:49.433508] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode2788: invalid cntlid range [0-65519] 00:12:41.697 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:41.697 { 00:12:41.697 "nqn": "nqn.2016-06.io.spdk:cnode2788", 00:12:41.697 "min_cntlid": 0, 00:12:41.697 "method": "nvmf_create_subsystem", 00:12:41.697 "req_id": 1 00:12:41.697 } 00:12:41.697 Got JSON-RPC error response 00:12:41.697 response: 00:12:41.697 { 00:12:41.697 "code": -32602, 00:12:41.697 "message": "Invalid cntlid range [0-65519]" 00:12:41.697 }' 00:12:41.697 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:41.697 { 00:12:41.697 "nqn": "nqn.2016-06.io.spdk:cnode2788", 00:12:41.697 "min_cntlid": 0, 00:12:41.697 "method": "nvmf_create_subsystem", 00:12:41.697 "req_id": 1 00:12:41.697 } 00:12:41.697 Got JSON-RPC error response 00:12:41.697 response: 00:12:41.697 { 00:12:41.697 "code": -32602, 00:12:41.697 "message": "Invalid cntlid range [0-65519]" 00:12:41.697 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.697 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19266 -i 65520 00:12:41.697 [2024-12-09 18:00:49.646288] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19266: invalid cntlid range [65520-65519] 00:12:41.955 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:41.955 { 00:12:41.955 "nqn": "nqn.2016-06.io.spdk:cnode19266", 00:12:41.955 "min_cntlid": 65520, 00:12:41.955 "method": "nvmf_create_subsystem", 00:12:41.955 "req_id": 1 00:12:41.955 } 00:12:41.955 Got JSON-RPC error response 00:12:41.955 response: 00:12:41.955 { 00:12:41.955 "code": -32602, 00:12:41.955 "message": "Invalid cntlid range [65520-65519]" 00:12:41.955 }' 00:12:41.955 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:41.955 { 00:12:41.955 "nqn": "nqn.2016-06.io.spdk:cnode19266", 00:12:41.955 "min_cntlid": 65520, 00:12:41.955 "method": "nvmf_create_subsystem", 00:12:41.955 "req_id": 1 00:12:41.955 } 00:12:41.955 Got JSON-RPC error response 00:12:41.955 response: 00:12:41.955 { 00:12:41.955 "code": -32602, 00:12:41.955 "message": "Invalid cntlid range [65520-65519]" 00:12:41.955 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.955 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9846 -I 0 00:12:41.955 [2024-12-09 18:00:49.863079] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9846: invalid cntlid range [1-0] 00:12:41.955 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:41.955 { 00:12:41.955 "nqn": "nqn.2016-06.io.spdk:cnode9846", 00:12:41.955 "max_cntlid": 0, 00:12:41.955 "method": "nvmf_create_subsystem", 00:12:41.955 "req_id": 1 00:12:41.955 } 00:12:41.955 Got JSON-RPC error response 00:12:41.955 response: 00:12:41.955 { 00:12:41.955 "code": -32602, 00:12:41.955 "message": "Invalid cntlid range [1-0]" 00:12:41.955 }' 00:12:41.955 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:41.955 { 00:12:41.955 "nqn": "nqn.2016-06.io.spdk:cnode9846", 00:12:41.955 "max_cntlid": 0, 00:12:41.955 "method": "nvmf_create_subsystem", 00:12:41.955 
"req_id": 1 00:12:41.955 } 00:12:41.955 Got JSON-RPC error response 00:12:41.955 response: 00:12:41.955 { 00:12:41.955 "code": -32602, 00:12:41.955 "message": "Invalid cntlid range [1-0]" 00:12:41.955 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.955 18:00:49 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6523 -I 65520 00:12:42.214 [2024-12-09 18:00:50.067832] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6523: invalid cntlid range [1-65520] 00:12:42.214 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:42.214 { 00:12:42.214 "nqn": "nqn.2016-06.io.spdk:cnode6523", 00:12:42.214 "max_cntlid": 65520, 00:12:42.214 "method": "nvmf_create_subsystem", 00:12:42.214 "req_id": 1 00:12:42.214 } 00:12:42.214 Got JSON-RPC error response 00:12:42.214 response: 00:12:42.214 { 00:12:42.214 "code": -32602, 00:12:42.214 "message": "Invalid cntlid range [1-65520]" 00:12:42.214 }' 00:12:42.214 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:42.214 { 00:12:42.214 "nqn": "nqn.2016-06.io.spdk:cnode6523", 00:12:42.214 "max_cntlid": 65520, 00:12:42.214 "method": "nvmf_create_subsystem", 00:12:42.214 "req_id": 1 00:12:42.214 } 00:12:42.214 Got JSON-RPC error response 00:12:42.214 response: 00:12:42.214 { 00:12:42.214 "code": -32602, 00:12:42.214 "message": "Invalid cntlid range [1-65520]" 00:12:42.214 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.214 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21666 -i 6 -I 5 00:12:42.472 [2024-12-09 18:00:50.264545] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21666: invalid cntlid range [6-5] 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:42.472 { 00:12:42.472 "nqn": "nqn.2016-06.io.spdk:cnode21666", 00:12:42.472 "min_cntlid": 6, 00:12:42.472 "max_cntlid": 5, 00:12:42.472 "method": "nvmf_create_subsystem", 00:12:42.472 "req_id": 1 00:12:42.472 } 00:12:42.472 Got JSON-RPC error response 00:12:42.472 response: 00:12:42.472 { 00:12:42.472 "code": -32602, 00:12:42.472 "message": "Invalid cntlid range [6-5]" 00:12:42.472 }' 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:42.472 { 00:12:42.472 "nqn": "nqn.2016-06.io.spdk:cnode21666", 00:12:42.472 "min_cntlid": 6, 00:12:42.472 "max_cntlid": 5, 00:12:42.472 "method": "nvmf_create_subsystem", 00:12:42.472 "req_id": 1 00:12:42.472 } 00:12:42.472 Got JSON-RPC error response 00:12:42.472 response: 00:12:42.472 { 00:12:42.472 "code": -32602, 00:12:42.472 "message": "Invalid cntlid range [6-5]" 00:12:42.472 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:42.472 { 00:12:42.472 "name": "foobar", 00:12:42.472 "method": "nvmf_delete_target", 00:12:42.472 "req_id": 1 00:12:42.472 } 00:12:42.472 Got JSON-RPC error response 00:12:42.472 
response: 00:12:42.472 { 00:12:42.472 "code": -32602, 00:12:42.472 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:42.472 }' 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:42.472 { 00:12:42.472 "name": "foobar", 00:12:42.472 "method": "nvmf_delete_target", 00:12:42.472 "req_id": 1 00:12:42.472 } 00:12:42.472 Got JSON-RPC error response 00:12:42.472 response: 00:12:42.472 { 00:12:42.472 "code": -32602, 00:12:42.472 "message": "The specified target doesn't exist, cannot delete it." 00:12:42.472 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:42.472 rmmod nvme_rdma 00:12:42.472 rmmod nvme_fabrics 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2299471 ']' 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2299471 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2299471 ']' 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2299471 00:12:42.472 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299471 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299471' 00:12:42.731 killing process with pid 2299471 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2299471 00:12:42.731 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2299471 00:12:42.990 18:00:50 
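nvmftestfini's module teardown runs with set +e because modprobe -r legitimately fails while a reference is still draining; it retries, restores set -e, and then kills the target by PID. A condensed sketch of that sequence (the retry bound matches the {1..20} loop traced above; the exact retry structure is an assumption):

    # Unload the fabric modules, tolerating transient "module in use" failures.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e

    kill "$nvmfpid"    # $nvmfpid was recorded when the target was launched
    wait "$nvmfpid"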
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:42.990 00:12:42.990 real 0m12.269s 00:12:42.990 user 0m22.757s 00:12:42.990 sys 0m6.757s 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:42.990 ************************************ 00:12:42.990 END TEST nvmf_invalid 00:12:42.990 ************************************ 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.990 ************************************ 00:12:42.990 START TEST nvmf_connect_stress 00:12:42.990 ************************************ 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:42.990 * Looking for test storage... 00:12:42.990 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:42.990 18:00:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:43.250 18:00:51 
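The cmp_versions records above and below are scripts/common.sh deciding whether the installed lcov predates version 2 (here: 1.15 < 2, so the legacy coverage flags get exported). The comparison splits both versions on dots and compares field by field; the same logic as a compact function (simplified to purely numeric fields, which the real helper also tolerates):

    # True when $1 sorts before $2, comparing dot-separated numeric fields.
    version_lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2: use the legacy flag set"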
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.250 --rc genhtml_branch_coverage=1 00:12:43.250 --rc genhtml_function_coverage=1 00:12:43.250 --rc genhtml_legend=1 00:12:43.250 --rc geninfo_all_blocks=1 00:12:43.250 --rc geninfo_unexecuted_blocks=1 00:12:43.250 00:12:43.250 ' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.250 --rc genhtml_branch_coverage=1 00:12:43.250 --rc genhtml_function_coverage=1 00:12:43.250 --rc genhtml_legend=1 00:12:43.250 --rc geninfo_all_blocks=1 00:12:43.250 --rc geninfo_unexecuted_blocks=1 00:12:43.250 00:12:43.250 ' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.250 --rc genhtml_branch_coverage=1 00:12:43.250 --rc genhtml_function_coverage=1 00:12:43.250 --rc genhtml_legend=1 00:12:43.250 --rc geninfo_all_blocks=1 00:12:43.250 --rc geninfo_unexecuted_blocks=1 00:12:43.250 00:12:43.250 ' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.250 --rc genhtml_branch_coverage=1 00:12:43.250 --rc 
genhtml_function_coverage=1 00:12:43.250 --rc genhtml_legend=1 00:12:43.250 --rc geninfo_all_blocks=1 00:12:43.250 --rc geninfo_unexecuted_blocks=1 00:12:43.250 00:12:43.250 ' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.250 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
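One real defect is captured just below: common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty variable, and test(1) cannot compare an empty string numerically, so the log records "[: : integer expression expected". The guard still behaves as false and the run continues, but the robust form defaults the variable first (the variable and flag names below are stand-ins for illustration, not the actual ones at line 33):

    # Defaulting with :-0 keeps test(1) from ever seeing an empty operand.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-flag)   # illustrative flag, not a real nvmf option
    fi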
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.251 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.251 18:00:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:51.380 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:51.380 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:51.380 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:51.380 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.380 18:00:58 
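Device discovery keys on PCI vendor:device IDs (0x15b3:0x1015 is a Mellanox ConnectX-4 Lx function) and then resolves each matching function to its kernel interface through sysfs, which is where mlx_0_0 and mlx_0_1 come from. The sysfs lookup on its own:

    # Every interface bound to a PCI function appears under its sysfs net/ dir.
    pci=0000:d9:00.0                        # first port found by the scan above
    ls "/sys/bus/pci/devices/$pci/net/"     # prints mlx_0_0 on this host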
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:51.380 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.381 
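Before any addresses are assigned, load_ib_rdma_modules (above) modprobes the InfiniBand/RDMA core stack one module at a time. The same sequence condensed into a loop; the loop form and error message are this sketch's restructuring, while the module list is exactly the one in the trace:

    #!/usr/bin/env bash
    # Load the kernel modules the RDMA transport path needs (requires root).
    [[ $(uname) == Linux ]] || { echo "RDMA kernel modules are Linux-only" >&2; exit 1; }
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
    done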
18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:51.381 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.381 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:51.381 altname enp217s0f0np0 00:12:51.381 altname ens818f0np0 00:12:51.381 inet 192.168.100.8/24 scope global mlx_0_0 00:12:51.381 valid_lft forever preferred_lft forever 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:51.381 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.381 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:51.381 altname enp217s0f1np1 00:12:51.381 altname ens818f1np1 00:12:51.381 inet 192.168.100.9/24 scope global mlx_0_1 00:12:51.381 valid_lft forever preferred_lft forever 00:12:51.381 18:00:58 
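The ip addr show dumps above bracket the harness's address extraction: "ip -o -4 addr show <if>" prints one line per IPv4 address, field 4 is addr/prefix (e.g. 192.168.100.8/24), awk selects that field, and cut drops the prefix length. The same pipeline as a standalone helper, with the argument handling added for this sketch:

    #!/usr/bin/env bash
    # Print the first IPv4 address on an interface, as get_ip_address does above.
    # Usage: ./get_ip.sh mlx_0_0
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address "${1:?usage: $0 <interface>}")
    [[ -n $ip ]] || { echo "no IPv4 address on $1" >&2; exit 1; }
    echo "$ip"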
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.381 
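allocate_nic_ips and get_available_rdma_ips both lean on get_rdma_if_list, whose nested loops appear twice in the trace: for every discovered net device it scans the rxe_cfg device list, and "continue 2" jumps straight back to the outer loop once a match is printed, so each interface is emitted at most once. A minimal reproduction of that control flow; the two arrays here are stand-in data for the harness's net_devs and rxe_net_devs:

    #!/usr/bin/env bash
    # Emit each net device that also appears in the RDMA-capable list,
    # using the 'continue 2' double-loop pattern from get_rdma_if_list.
    net_devs=(mlx_0_0 mlx_0_1 eno1)   # all discovered interfaces (stand-in data)
    rxe_net_devs=(mlx_0_0 mlx_0_1)    # interfaces reported as RDMA-capable

    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2            # done with this net_dev; next outer iteration
            fi
        done
    done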
18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.381 192.168.100.9' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:51.381 192.168.100.9' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:51.381 192.168.100.9' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2303879 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2303879 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2303879 ']' 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.381 18:00:58 
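RDMA_IP_LIST above is a newline-separated string ("192.168.100.8" then "192.168.100.9"), and the trace shows it being peeled apart with head/tail: head -n 1 for the first target address, tail -n +2 piped through head -n 1 for the second. The same split in isolation, with stand-in data:

    #!/usr/bin/env bash
    # Split a newline-separated IP list into first/second target addresses,
    # mirroring the head/tail pipelines in the trace above.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # stand-in for the gathered list

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    [[ -n $NVMF_FIRST_TARGET_IP ]] || { echo "no RDMA-capable IPs found" >&2; exit 1; }
    echo "first:  $NVMF_FIRST_TARGET_IP"
    echo "second: $NVMF_SECOND_TARGET_IP"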
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.381 18:00:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.381 [2024-12-09 18:00:58.388197] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:12:51.381 [2024-12-09 18:00:58.388247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.381 [2024-12-09 18:00:58.477465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.381 [2024-12-09 18:00:58.516705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.381 [2024-12-09 18:00:58.516744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.381 [2024-12-09 18:00:58.516754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.381 [2024-12-09 18:00:58.516762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.381 [2024-12-09 18:00:58.516769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.381 [2024-12-09 18:00:58.518340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.381 [2024-12-09 18:00:58.518447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.381 [2024-12-09 18:00:58.518449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.381 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.381 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:51.381 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.382 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.382 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.382 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.382 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:51.382 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.382 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.382 [2024-12-09 18:00:59.311007] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd220c0/0xd265b0) succeed. 
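The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is waitforlisten gating the test on target readiness: the target was launched in the background, its PID captured as nvmfpid, and the helper polls (max_retries=100 in the trace) until RPCs can be served. A simplified sketch of that gate; checking that the socket file merely exists is this sketch's shortcut, the harness's own readiness check is richer:

    #!/usr/bin/env bash
    # Wait for a background app to come up and create its UNIX-domain RPC socket.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    app_pid=${1:?usage: $0 <pid>}

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$app_pid" 2>/dev/null || { echo "app exited during startup" >&2; exit 1; }
        [[ -S $rpc_addr ]] && exit 0   # socket created -> ready enough for RPCs
        sleep 0.5
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1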
00:12:51.382 [2024-12-09 18:00:59.320012] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd236b0/0xd67c50) succeed. 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.640 [2024-12-09 18:00:59.440864] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.640 NULL1 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2304119 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:51.640 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 
18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 
18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.641 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.207 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.207 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:52.207 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.207 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.207 18:00:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.464 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:52.464 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.464 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.464 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.722 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.722 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:52.722 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.722 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.722 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.980 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.980 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:52.980 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.980 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.980 18:01:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.238 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.238 
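Everything the stress run needs was provisioned over RPC in the lines above: an RDMA transport with 1024 shared buffers and 8192-byte I/O units, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 192.168.100.8:4420, and a null bdev (NULL1, 1000 MiB, 512-byte blocks) to absorb I/O. rpc_cmd in the trace is the harness's wrapper over SPDK's scripts/rpc.py; replayed directly against rpc.py the sequence looks roughly like this, with $rootdir standing for the SPDK checkout:

    #!/usr/bin/env bash
    # Stand up the NVMe-oF/RDMA target exercised by connect_stress, issuing
    # the same RPCs the trace sends through rpc_cmd.
    set -e
    rootdir=${rootdir:?point rootdir at the SPDK checkout}
    rpc=$rootdir/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # any host, <=10 namespaces
    $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512                               # 1000 MiB, 512 B blocks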
18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:53.238 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.238 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.238 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.805 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.805 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:53.805 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.805 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.805 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.062 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.062 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:54.062 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.062 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.062 18:01:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.321 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.321 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:54.321 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.321 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.321 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.579 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.579 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:54.579 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.579 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.579 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.145 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.145 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:55.145 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.145 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.145 18:01:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.403 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
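The long run of repeated "kill -0 2304119" / rpc_cmd pairs that follows is the stress driver's watchdog: kill -0 delivers no signal, it only tests that the connect_stress process (PID 2304119) still exists, and while it does the script keeps replaying RPCs from rpc.txt against the target. The pattern in isolation; issue_rpc_batch is a placeholder for that replay, not a harness function:

    #!/usr/bin/env bash
    # Poll a worker's liveness with 'kill -0' and keep issuing work while it runs,
    # as the connect_stress watchdog loop above does.
    perf_pid=${1:?usage: $0 <pid>}

    issue_rpc_batch() { :; }   # stand-in; the real loop runs queued RPCs here

    while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0 = existence check only
        issue_rpc_batch
        sleep 1
    done
    echo "process $perf_pid has exited"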
00:12:55.403 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:55.403 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.403 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.403 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.661 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.661 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:55.661 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.661 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.661 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.919 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.919 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:55.919 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.919 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.919 18:01:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.177 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.177 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:56.177 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.177 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.177 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.742 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.742 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:56.742 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.743 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.743 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.000 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.000 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:57.000 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.000 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.000 18:01:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.260 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:57.260 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:57.260 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.260 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.260 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.517 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.517 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:57.517 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.517 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.517 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.083 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.083 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:58.083 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.083 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.083 18:01:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.346 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.346 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:58.346 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.346 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.346 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.604 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.604 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:58.604 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.604 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.604 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.862 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.862 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:58.862 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.862 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.862 18:01:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.119 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.119 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:59.119 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.119 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.119 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.684 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.684 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:59.684 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.684 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.684 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.942 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.942 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:12:59.942 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.942 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.942 18:01:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.200 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.200 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:00.200 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.200 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.200 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.458 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.458 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:00.458 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.458 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.458 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.023 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.023 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:01.023 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.023 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.023 18:01:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.281 18:01:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.281 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:01.281 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.281 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.281 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.539 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.539 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:01.539 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.539 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.539 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.797 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.797 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:01.797 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.797 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.797 18:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.797 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2304119 00:13:02.056 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2304119) - No such process 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2304119 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.056 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-rdma 00:13:02.056 rmmod nvme_rdma 00:13:02.315 rmmod nvme_fabrics 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2303879 ']' 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2303879 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2303879 ']' 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2303879 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2303879 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2303879' 00:13:02.315 killing process with pid 2303879 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2303879 00:13:02.315 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2303879 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:02.574 00:13:02.574 real 0m19.536s 00:13:02.574 user 0m43.117s 00:13:02.574 sys 0m8.306s 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.574 ************************************ 00:13:02.574 END TEST nvmf_connect_stress 00:13:02.574 ************************************ 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.574 ************************************ 00:13:02.574 START TEST nvmf_fused_ordering 00:13:02.574 ************************************ 00:13:02.574 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
--transport=rdma 00:13:02.834 * Looking for test storage... 00:13:02.834 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:02.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.834 --rc genhtml_branch_coverage=1 00:13:02.834 --rc genhtml_function_coverage=1 00:13:02.834 --rc genhtml_legend=1 00:13:02.834 --rc geninfo_all_blocks=1 00:13:02.834 --rc geninfo_unexecuted_blocks=1 00:13:02.834 00:13:02.834 ' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:02.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.834 --rc genhtml_branch_coverage=1 00:13:02.834 --rc genhtml_function_coverage=1 00:13:02.834 --rc genhtml_legend=1 00:13:02.834 --rc geninfo_all_blocks=1 00:13:02.834 --rc geninfo_unexecuted_blocks=1 00:13:02.834 00:13:02.834 ' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:02.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.834 --rc genhtml_branch_coverage=1 00:13:02.834 --rc genhtml_function_coverage=1 00:13:02.834 --rc genhtml_legend=1 00:13:02.834 --rc geninfo_all_blocks=1 00:13:02.834 --rc geninfo_unexecuted_blocks=1 00:13:02.834 00:13:02.834 ' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:02.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.834 --rc genhtml_branch_coverage=1 00:13:02.834 --rc genhtml_function_coverage=1 00:13:02.834 --rc genhtml_legend=1 00:13:02.834 --rc geninfo_all_blocks=1 00:13:02.834 --rc geninfo_unexecuted_blocks=1 00:13:02.834 00:13:02.834 ' 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:02.834 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.835 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:02.835 18:01:10 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:11.042 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:11.042 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:11.042 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:11.043 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:11.043 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.043 18:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:11.043 
18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:11.043 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:11.043 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:11.043 altname enp217s0f0np0 00:13:11.043 altname ens818f0np0 00:13:11.043 inet 192.168.100.8/24 scope global mlx_0_0 00:13:11.043 valid_lft forever preferred_lft forever 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:11.043 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:11.043 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:11.043 altname enp217s0f1np1 00:13:11.043 altname ens818f1np1 00:13:11.043 inet 192.168.100.9/24 scope global mlx_0_1 00:13:11.043 valid_lft forever preferred_lft forever 00:13:11.043 18:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:11.043 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:11.044 
18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:11.044 192.168.100.9' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:11.044 192.168.100.9' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:11.044 192.168.100.9' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2309222 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2309222 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2309222 ']' 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.044 18:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.044 18:01:17 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 [2024-12-09 18:01:17.927990] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:13:11.044 [2024-12-09 18:01:17.928040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.044 [2024-12-09 18:01:18.020303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.044 [2024-12-09 18:01:18.059565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.044 [2024-12-09 18:01:18.059603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.044 [2024-12-09 18:01:18.059613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.044 [2024-12-09 18:01:18.059621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.044 [2024-12-09 18:01:18.059628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.044 [2024-12-09 18:01:18.060221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 [2024-12-09 18:01:18.831013] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7c59a0/0x7c9e90) succeed. 00:13:11.044 [2024-12-09 18:01:18.839472] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7c6e50/0x80b530) succeed. 
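At this point nvmftestinit has matched both Mellanox ports (vendor 0x15b3, device 0x1015) to mlx_0_0/mlx_0_1, verified 192.168.100.8/24 and 192.168.100.9/24 on them, loaded the IB/RDMA kernel modules, and nvmfappstart has launched nvmf_tgt (pid 2309222) pinned to core 1 via -m 0x2; the create_ib_device notices above are emitted while nvmf_create_transport registers the RDMA transport against both devices. A minimal sketch of the same bring-up outside the harness: the relative paths, the default /var/tmp/spdk.sock RPC socket, and the use of framework_wait_init in place of the harness's waitforlisten helper are assumptions, not values taken from this job (rpc_cmd in the trace is effectively scripts/rpc.py).
# sketch: manual equivalent of nvmfappstart plus the transport RPC traced above
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
scripts/rpc.py framework_wait_init            # assumption: block until the target finishes init
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192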
00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 [2024-12-09 18:01:18.884928] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 NULL1 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.044 18:01:18 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:11.044 [2024-12-09 18:01:18.939803] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
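The RPC sequence traced above gives the freshly started target something to serve: subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces), an RDMA listener on 192.168.100.8:4420, and a 1000 MiB, 512-byte-block null bdev attached as its first namespace, matching the "Namespace ID: 1 size: 1GB" attach line below. As a standalone sequence, under the same assumptions as the previous sketch:
# sketch: the target-side provisioning RPCs, then the host-side test app
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
The fused_ordering(N) lines that follow are the test app's iteration counter as it drives fused command pairs at that namespace; this excerpt shows the counter reaching at least fused_ordering(958) before it cuts off.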
00:13:11.044 [2024-12-09 18:01:18.939844] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309501 ] 00:13:11.304 Attached to nqn.2016-06.io.spdk:cnode1 00:13:11.304 Namespace ID: 1 size: 1GB 00:13:11.304 fused_ordering(0) 00:13:11.304 fused_ordering(1) 00:13:11.304 fused_ordering(2) 00:13:11.304 fused_ordering(3) 00:13:11.304 fused_ordering(4) 00:13:11.304 fused_ordering(5) 00:13:11.304 fused_ordering(6) 00:13:11.304 fused_ordering(7) 00:13:11.304 fused_ordering(8) 00:13:11.304 fused_ordering(9) 00:13:11.304 fused_ordering(10) 00:13:11.304 fused_ordering(11) 00:13:11.304 fused_ordering(12) 00:13:11.304 fused_ordering(13) 00:13:11.304 fused_ordering(14) 00:13:11.304 fused_ordering(15) 00:13:11.304 fused_ordering(16) 00:13:11.304 fused_ordering(17) 00:13:11.304 fused_ordering(18) 00:13:11.304 fused_ordering(19) 00:13:11.304 fused_ordering(20) 00:13:11.304 fused_ordering(21) 00:13:11.304 fused_ordering(22) 00:13:11.304 fused_ordering(23) 00:13:11.304 fused_ordering(24) 00:13:11.304 fused_ordering(25) 00:13:11.304 fused_ordering(26) 00:13:11.304 fused_ordering(27) 00:13:11.304 fused_ordering(28) 00:13:11.304 fused_ordering(29) 00:13:11.304 fused_ordering(30) 00:13:11.304 fused_ordering(31) 00:13:11.304 fused_ordering(32) 00:13:11.304 fused_ordering(33) 00:13:11.304 fused_ordering(34) 00:13:11.304 fused_ordering(35) 00:13:11.304 fused_ordering(36) 00:13:11.304 fused_ordering(37) 00:13:11.304 fused_ordering(38) 00:13:11.304 fused_ordering(39) 00:13:11.304 fused_ordering(40) 00:13:11.304 fused_ordering(41) 00:13:11.304 fused_ordering(42) 00:13:11.304 fused_ordering(43) 00:13:11.304 fused_ordering(44) 00:13:11.304 fused_ordering(45) 00:13:11.304 fused_ordering(46) 00:13:11.304 fused_ordering(47) 00:13:11.304 fused_ordering(48) 00:13:11.304 fused_ordering(49) 00:13:11.304 fused_ordering(50) 00:13:11.304 fused_ordering(51) 00:13:11.304 fused_ordering(52) 00:13:11.304 fused_ordering(53) 00:13:11.304 fused_ordering(54) 00:13:11.304 fused_ordering(55) 00:13:11.304 fused_ordering(56) 00:13:11.304 fused_ordering(57) 00:13:11.304 fused_ordering(58) 00:13:11.304 fused_ordering(59) 00:13:11.304 fused_ordering(60) 00:13:11.304 fused_ordering(61) 00:13:11.304 fused_ordering(62) 00:13:11.304 fused_ordering(63) 00:13:11.304 fused_ordering(64) 00:13:11.304 fused_ordering(65) 00:13:11.304 fused_ordering(66) 00:13:11.304 fused_ordering(67) 00:13:11.304 fused_ordering(68) 00:13:11.304 fused_ordering(69) 00:13:11.304 fused_ordering(70) 00:13:11.304 fused_ordering(71) 00:13:11.304 fused_ordering(72) 00:13:11.304 fused_ordering(73) 00:13:11.304 fused_ordering(74) 00:13:11.304 fused_ordering(75) 00:13:11.304 fused_ordering(76) 00:13:11.304 fused_ordering(77) 00:13:11.304 fused_ordering(78) 00:13:11.304 fused_ordering(79) 00:13:11.304 fused_ordering(80) 00:13:11.304 fused_ordering(81) 00:13:11.304 fused_ordering(82) 00:13:11.304 fused_ordering(83) 00:13:11.304 fused_ordering(84) 00:13:11.304 fused_ordering(85) 00:13:11.304 fused_ordering(86) 00:13:11.304 fused_ordering(87) 00:13:11.304 fused_ordering(88) 00:13:11.304 fused_ordering(89) 00:13:11.304 fused_ordering(90) 00:13:11.304 fused_ordering(91) 00:13:11.304 fused_ordering(92) 00:13:11.304 fused_ordering(93) 00:13:11.304 fused_ordering(94) 00:13:11.304 fused_ordering(95) 00:13:11.304 fused_ordering(96) 00:13:11.304 fused_ordering(97) 00:13:11.304 fused_ordering(98) 
00:13:11.304 fused_ordering(99) 00:13:11.304 fused_ordering(100) 00:13:11.304 fused_ordering(101) 00:13:11.304 fused_ordering(102) 00:13:11.304 fused_ordering(103) 00:13:11.304 fused_ordering(104) 00:13:11.304 fused_ordering(105) 00:13:11.304 fused_ordering(106) 00:13:11.304 fused_ordering(107) 00:13:11.304 fused_ordering(108) 00:13:11.304 fused_ordering(109) 00:13:11.304 fused_ordering(110) 00:13:11.304 fused_ordering(111) 00:13:11.305 fused_ordering(112) 00:13:11.305 fused_ordering(113) 00:13:11.305 fused_ordering(114) 00:13:11.305 fused_ordering(115) 00:13:11.305 fused_ordering(116) 00:13:11.305 fused_ordering(117) 00:13:11.305 fused_ordering(118) 00:13:11.305 fused_ordering(119) 00:13:11.305 fused_ordering(120) 00:13:11.305 fused_ordering(121) 00:13:11.305 fused_ordering(122) 00:13:11.305 fused_ordering(123) 00:13:11.305 fused_ordering(124) 00:13:11.305 fused_ordering(125) 00:13:11.305 fused_ordering(126) 00:13:11.305 fused_ordering(127) 00:13:11.305 fused_ordering(128) 00:13:11.305 fused_ordering(129) 00:13:11.305 fused_ordering(130) 00:13:11.305 fused_ordering(131) 00:13:11.305 fused_ordering(132) 00:13:11.305 fused_ordering(133) 00:13:11.305 fused_ordering(134) 00:13:11.305 fused_ordering(135) 00:13:11.305 fused_ordering(136) 00:13:11.305 fused_ordering(137) 00:13:11.305 fused_ordering(138) 00:13:11.305 fused_ordering(139) 00:13:11.305 fused_ordering(140) 00:13:11.305 fused_ordering(141) 00:13:11.305 fused_ordering(142) 00:13:11.305 fused_ordering(143) 00:13:11.305 fused_ordering(144) 00:13:11.305 fused_ordering(145) 00:13:11.305 fused_ordering(146) 00:13:11.305 fused_ordering(147) 00:13:11.305 fused_ordering(148) 00:13:11.305 fused_ordering(149) 00:13:11.305 fused_ordering(150) 00:13:11.305 fused_ordering(151) 00:13:11.305 fused_ordering(152) 00:13:11.305 fused_ordering(153) 00:13:11.305 fused_ordering(154) 00:13:11.305 fused_ordering(155) 00:13:11.305 fused_ordering(156) 00:13:11.305 fused_ordering(157) 00:13:11.305 fused_ordering(158) 00:13:11.305 fused_ordering(159) 00:13:11.305 fused_ordering(160) 00:13:11.305 fused_ordering(161) 00:13:11.305 fused_ordering(162) 00:13:11.305 fused_ordering(163) 00:13:11.305 fused_ordering(164) 00:13:11.305 fused_ordering(165) 00:13:11.305 fused_ordering(166) 00:13:11.305 fused_ordering(167) 00:13:11.305 fused_ordering(168) 00:13:11.305 fused_ordering(169) 00:13:11.305 fused_ordering(170) 00:13:11.305 fused_ordering(171) 00:13:11.305 fused_ordering(172) 00:13:11.305 fused_ordering(173) 00:13:11.305 fused_ordering(174) 00:13:11.305 fused_ordering(175) 00:13:11.305 fused_ordering(176) 00:13:11.305 fused_ordering(177) 00:13:11.305 fused_ordering(178) 00:13:11.305 fused_ordering(179) 00:13:11.305 fused_ordering(180) 00:13:11.305 fused_ordering(181) 00:13:11.305 fused_ordering(182) 00:13:11.305 fused_ordering(183) 00:13:11.305 fused_ordering(184) 00:13:11.305 fused_ordering(185) 00:13:11.305 fused_ordering(186) 00:13:11.305 fused_ordering(187) 00:13:11.305 fused_ordering(188) 00:13:11.305 fused_ordering(189) 00:13:11.305 fused_ordering(190) 00:13:11.305 fused_ordering(191) 00:13:11.305 fused_ordering(192) 00:13:11.305 fused_ordering(193) 00:13:11.305 fused_ordering(194) 00:13:11.305 fused_ordering(195) 00:13:11.305 fused_ordering(196) 00:13:11.305 fused_ordering(197) 00:13:11.305 fused_ordering(198) 00:13:11.305 fused_ordering(199) 00:13:11.305 fused_ordering(200) 00:13:11.305 fused_ordering(201) 00:13:11.305 fused_ordering(202) 00:13:11.305 fused_ordering(203) 00:13:11.305 fused_ordering(204) 00:13:11.305 fused_ordering(205) 00:13:11.305 
fused_ordering(206) 00:13:11.305 fused_ordering(207) 00:13:11.305 fused_ordering(208) 00:13:11.305 fused_ordering(209) 00:13:11.305 fused_ordering(210) 00:13:11.305 fused_ordering(211) 00:13:11.305 fused_ordering(212) 00:13:11.305 fused_ordering(213) 00:13:11.305 fused_ordering(214) 00:13:11.305 fused_ordering(215) 00:13:11.305 fused_ordering(216) 00:13:11.305 fused_ordering(217) 00:13:11.305 fused_ordering(218) 00:13:11.305 fused_ordering(219) 00:13:11.305 fused_ordering(220) 00:13:11.305 fused_ordering(221) 00:13:11.305 fused_ordering(222) 00:13:11.305 fused_ordering(223) 00:13:11.305 fused_ordering(224) 00:13:11.305 fused_ordering(225) 00:13:11.305 fused_ordering(226) 00:13:11.305 fused_ordering(227) 00:13:11.305 fused_ordering(228) 00:13:11.305 fused_ordering(229) 00:13:11.305 fused_ordering(230) 00:13:11.305 fused_ordering(231) 00:13:11.305 fused_ordering(232) 00:13:11.305 fused_ordering(233) 00:13:11.305 fused_ordering(234) 00:13:11.305 fused_ordering(235) 00:13:11.305 fused_ordering(236) 00:13:11.305 fused_ordering(237) 00:13:11.305 fused_ordering(238) 00:13:11.305 fused_ordering(239) 00:13:11.305 fused_ordering(240) 00:13:11.305 fused_ordering(241) 00:13:11.305 fused_ordering(242) 00:13:11.305 fused_ordering(243) 00:13:11.305 fused_ordering(244) 00:13:11.305 fused_ordering(245) 00:13:11.305 fused_ordering(246) 00:13:11.305 fused_ordering(247) 00:13:11.305 fused_ordering(248) 00:13:11.305 fused_ordering(249) 00:13:11.305 fused_ordering(250) 00:13:11.305 fused_ordering(251) 00:13:11.305 fused_ordering(252) 00:13:11.305 fused_ordering(253) 00:13:11.305 fused_ordering(254) 00:13:11.305 fused_ordering(255) 00:13:11.305 fused_ordering(256) 00:13:11.305 fused_ordering(257) 00:13:11.305 fused_ordering(258) 00:13:11.305 fused_ordering(259) 00:13:11.305 fused_ordering(260) 00:13:11.305 fused_ordering(261) 00:13:11.305 fused_ordering(262) 00:13:11.305 fused_ordering(263) 00:13:11.305 fused_ordering(264) 00:13:11.305 fused_ordering(265) 00:13:11.305 fused_ordering(266) 00:13:11.305 fused_ordering(267) 00:13:11.305 fused_ordering(268) 00:13:11.305 fused_ordering(269) 00:13:11.305 fused_ordering(270) 00:13:11.305 fused_ordering(271) 00:13:11.305 fused_ordering(272) 00:13:11.305 fused_ordering(273) 00:13:11.305 fused_ordering(274) 00:13:11.305 fused_ordering(275) 00:13:11.305 fused_ordering(276) 00:13:11.305 fused_ordering(277) 00:13:11.305 fused_ordering(278) 00:13:11.305 fused_ordering(279) 00:13:11.305 fused_ordering(280) 00:13:11.305 fused_ordering(281) 00:13:11.305 fused_ordering(282) 00:13:11.305 fused_ordering(283) 00:13:11.305 fused_ordering(284) 00:13:11.305 fused_ordering(285) 00:13:11.305 fused_ordering(286) 00:13:11.305 fused_ordering(287) 00:13:11.305 fused_ordering(288) 00:13:11.305 fused_ordering(289) 00:13:11.305 fused_ordering(290) 00:13:11.305 fused_ordering(291) 00:13:11.305 fused_ordering(292) 00:13:11.305 fused_ordering(293) 00:13:11.305 fused_ordering(294) 00:13:11.305 fused_ordering(295) 00:13:11.305 fused_ordering(296) 00:13:11.305 fused_ordering(297) 00:13:11.305 fused_ordering(298) 00:13:11.305 fused_ordering(299) 00:13:11.305 fused_ordering(300) 00:13:11.305 fused_ordering(301) 00:13:11.305 fused_ordering(302) 00:13:11.305 fused_ordering(303) 00:13:11.305 fused_ordering(304) 00:13:11.305 fused_ordering(305) 00:13:11.305 fused_ordering(306) 00:13:11.305 fused_ordering(307) 00:13:11.305 fused_ordering(308) 00:13:11.305 fused_ordering(309) 00:13:11.305 fused_ordering(310) 00:13:11.305 fused_ordering(311) 00:13:11.305 fused_ordering(312) 00:13:11.305 fused_ordering(313) 
00:13:11.305 fused_ordering(314) 00:13:11.305 fused_ordering(315) 00:13:11.305 fused_ordering(316) 00:13:11.305 fused_ordering(317) 00:13:11.305 fused_ordering(318) 00:13:11.305 fused_ordering(319) 00:13:11.305 fused_ordering(320) 00:13:11.305 fused_ordering(321) 00:13:11.305 fused_ordering(322) 00:13:11.305 fused_ordering(323) 00:13:11.305 fused_ordering(324) 00:13:11.305 fused_ordering(325) 00:13:11.305 fused_ordering(326) 00:13:11.305 fused_ordering(327) 00:13:11.305 fused_ordering(328) 00:13:11.305 fused_ordering(329) 00:13:11.305 fused_ordering(330) 00:13:11.305 fused_ordering(331) 00:13:11.305 fused_ordering(332) 00:13:11.305 fused_ordering(333) 00:13:11.305 fused_ordering(334) 00:13:11.305 fused_ordering(335) 00:13:11.305 fused_ordering(336) 00:13:11.305 fused_ordering(337) 00:13:11.305 fused_ordering(338) 00:13:11.305 fused_ordering(339) 00:13:11.305 fused_ordering(340) 00:13:11.305 fused_ordering(341) 00:13:11.305 fused_ordering(342) 00:13:11.305 fused_ordering(343) 00:13:11.305 fused_ordering(344) 00:13:11.305 fused_ordering(345) 00:13:11.305 fused_ordering(346) 00:13:11.305 fused_ordering(347) 00:13:11.305 fused_ordering(348) 00:13:11.305 fused_ordering(349) 00:13:11.305 fused_ordering(350) 00:13:11.305 fused_ordering(351) 00:13:11.305 fused_ordering(352) 00:13:11.305 fused_ordering(353) 00:13:11.305 fused_ordering(354) 00:13:11.305 fused_ordering(355) 00:13:11.305 fused_ordering(356) 00:13:11.305 fused_ordering(357) 00:13:11.305 fused_ordering(358) 00:13:11.305 fused_ordering(359) 00:13:11.305 fused_ordering(360) 00:13:11.305 fused_ordering(361) 00:13:11.305 fused_ordering(362) 00:13:11.305 fused_ordering(363) 00:13:11.305 fused_ordering(364) 00:13:11.305 fused_ordering(365) 00:13:11.305 fused_ordering(366) 00:13:11.305 fused_ordering(367) 00:13:11.305 fused_ordering(368) 00:13:11.305 fused_ordering(369) 00:13:11.305 fused_ordering(370) 00:13:11.305 fused_ordering(371) 00:13:11.305 fused_ordering(372) 00:13:11.305 fused_ordering(373) 00:13:11.305 fused_ordering(374) 00:13:11.305 fused_ordering(375) 00:13:11.305 fused_ordering(376) 00:13:11.305 fused_ordering(377) 00:13:11.305 fused_ordering(378) 00:13:11.305 fused_ordering(379) 00:13:11.305 fused_ordering(380) 00:13:11.305 fused_ordering(381) 00:13:11.305 fused_ordering(382) 00:13:11.305 fused_ordering(383) 00:13:11.305 fused_ordering(384) 00:13:11.305 fused_ordering(385) 00:13:11.305 fused_ordering(386) 00:13:11.305 fused_ordering(387) 00:13:11.305 fused_ordering(388) 00:13:11.305 fused_ordering(389) 00:13:11.305 fused_ordering(390) 00:13:11.305 fused_ordering(391) 00:13:11.305 fused_ordering(392) 00:13:11.305 fused_ordering(393) 00:13:11.305 fused_ordering(394) 00:13:11.305 fused_ordering(395) 00:13:11.305 fused_ordering(396) 00:13:11.305 fused_ordering(397) 00:13:11.305 fused_ordering(398) 00:13:11.305 fused_ordering(399) 00:13:11.305 fused_ordering(400) 00:13:11.306 fused_ordering(401) 00:13:11.306 fused_ordering(402) 00:13:11.306 fused_ordering(403) 00:13:11.306 fused_ordering(404) 00:13:11.306 fused_ordering(405) 00:13:11.306 fused_ordering(406) 00:13:11.306 fused_ordering(407) 00:13:11.306 fused_ordering(408) 00:13:11.306 fused_ordering(409) 00:13:11.306 fused_ordering(410) 00:13:11.565 fused_ordering(411) 00:13:11.565 fused_ordering(412) 00:13:11.565 fused_ordering(413) 00:13:11.565 fused_ordering(414) 00:13:11.565 fused_ordering(415) 00:13:11.565 fused_ordering(416) 00:13:11.565 fused_ordering(417) 00:13:11.565 fused_ordering(418) 00:13:11.565 fused_ordering(419) 00:13:11.565 fused_ordering(420) 00:13:11.565 
fused_ordering(421) 00:13:11.565 fused_ordering(422) 00:13:11.565 fused_ordering(423) 00:13:11.565 fused_ordering(424) 00:13:11.565 fused_ordering(425) 00:13:11.565 fused_ordering(426) 00:13:11.565 fused_ordering(427) 00:13:11.565 fused_ordering(428) 00:13:11.565 fused_ordering(429) 00:13:11.565 fused_ordering(430) 00:13:11.565 fused_ordering(431) 00:13:11.565 fused_ordering(432) 00:13:11.565 fused_ordering(433) 00:13:11.565 fused_ordering(434) 00:13:11.565 fused_ordering(435) 00:13:11.565 fused_ordering(436) 00:13:11.565 fused_ordering(437) 00:13:11.565 fused_ordering(438) 00:13:11.565 fused_ordering(439) 00:13:11.565 fused_ordering(440) 00:13:11.565 fused_ordering(441) 00:13:11.565 fused_ordering(442) 00:13:11.565 fused_ordering(443) 00:13:11.565 fused_ordering(444) 00:13:11.565 fused_ordering(445) 00:13:11.565 fused_ordering(446) 00:13:11.565 fused_ordering(447) 00:13:11.565 fused_ordering(448) 00:13:11.565 fused_ordering(449) 00:13:11.565 fused_ordering(450) 00:13:11.565 fused_ordering(451) 00:13:11.565 fused_ordering(452) 00:13:11.565 fused_ordering(453) 00:13:11.565 fused_ordering(454) 00:13:11.565 fused_ordering(455) 00:13:11.565 fused_ordering(456) 00:13:11.565 fused_ordering(457) 00:13:11.565 fused_ordering(458) 00:13:11.565 fused_ordering(459) 00:13:11.565 fused_ordering(460) 00:13:11.565 fused_ordering(461) 00:13:11.565 fused_ordering(462) 00:13:11.565 fused_ordering(463) 00:13:11.565 fused_ordering(464) 00:13:11.565 fused_ordering(465) 00:13:11.565 fused_ordering(466) 00:13:11.565 fused_ordering(467) 00:13:11.565 fused_ordering(468) 00:13:11.565 fused_ordering(469) 00:13:11.565 fused_ordering(470) 00:13:11.565 fused_ordering(471) 00:13:11.565 fused_ordering(472) 00:13:11.565 fused_ordering(473) 00:13:11.565 fused_ordering(474) 00:13:11.565 fused_ordering(475) 00:13:11.565 fused_ordering(476) 00:13:11.565 fused_ordering(477) 00:13:11.565 fused_ordering(478) 00:13:11.565 fused_ordering(479) 00:13:11.565 fused_ordering(480) 00:13:11.565 fused_ordering(481) 00:13:11.565 fused_ordering(482) 00:13:11.565 fused_ordering(483) 00:13:11.566 fused_ordering(484) 00:13:11.566 fused_ordering(485) 00:13:11.566 fused_ordering(486) 00:13:11.566 fused_ordering(487) 00:13:11.566 fused_ordering(488) 00:13:11.566 fused_ordering(489) 00:13:11.566 fused_ordering(490) 00:13:11.566 fused_ordering(491) 00:13:11.566 fused_ordering(492) 00:13:11.566 fused_ordering(493) 00:13:11.566 fused_ordering(494) 00:13:11.566 fused_ordering(495) 00:13:11.566 fused_ordering(496) 00:13:11.566 fused_ordering(497) 00:13:11.566 fused_ordering(498) 00:13:11.566 fused_ordering(499) 00:13:11.566 fused_ordering(500) 00:13:11.566 fused_ordering(501) 00:13:11.566 fused_ordering(502) 00:13:11.566 fused_ordering(503) 00:13:11.566 fused_ordering(504) 00:13:11.566 fused_ordering(505) 00:13:11.566 fused_ordering(506) 00:13:11.566 fused_ordering(507) 00:13:11.566 fused_ordering(508) 00:13:11.566 fused_ordering(509) 00:13:11.566 fused_ordering(510) 00:13:11.566 fused_ordering(511) 00:13:11.566 fused_ordering(512) 00:13:11.566 fused_ordering(513) 00:13:11.566 fused_ordering(514) 00:13:11.566 fused_ordering(515) 00:13:11.566 fused_ordering(516) 00:13:11.566 fused_ordering(517) 00:13:11.566 fused_ordering(518) 00:13:11.566 fused_ordering(519) 00:13:11.566 fused_ordering(520) 00:13:11.566 fused_ordering(521) 00:13:11.566 fused_ordering(522) 00:13:11.566 fused_ordering(523) 00:13:11.566 fused_ordering(524) 00:13:11.566 fused_ordering(525) 00:13:11.566 fused_ordering(526) 00:13:11.566 fused_ordering(527) 00:13:11.566 fused_ordering(528) 
00:13:11.566 fused_ordering(529) ... 00:13:11.827 fused_ordering(1023) [fused_ordering(530) through fused_ordering(1022) elided: identical one-per-iteration progress output, all logged between 00:13:11.566 and 00:13:11.827] 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:11.827 rmmod nvme_rdma 00:13:11.827 rmmod nvme_fabrics 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:11.827 18:01:19
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2309222 ']' 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2309222 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2309222 ']' 00:13:11.827 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2309222 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309222 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309222' 00:13:11.828 killing process with pid 2309222 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2309222 00:13:11.828 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2309222 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:12.087 00:13:12.087 real 0m9.484s 00:13:12.087 user 0m4.944s 00:13:12.087 sys 0m5.972s 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:12.087 ************************************ 00:13:12.087 END TEST nvmf_fused_ordering 00:13:12.087 ************************************ 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.087 18:01:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.087 ************************************ 00:13:12.087 START TEST nvmf_ns_masking 00:13:12.087 ************************************ 00:13:12.087 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:12.347 * Looking for test storage... 
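The nvmftestfini teardown traced above unloads the transport modules under set +e, retrying inside a for i in {1..20} loop before re-enabling set -e. A minimal standalone sketch of that unload-with-retry pattern (the loop bound and module names come from the trace; the sleep back-off is an assumption and does not appear in the log):

    #!/usr/bin/env bash
    # Retry unloading the NVMe/RDMA modules: rmmod can fail transiently
    # while the target still holds references, so tolerate failures for
    # a bounded number of attempts before restoring errexit.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed back-off between attempts (not in the original trace)
    done
    set -e

Run as root; on success the verbose output mirrors the 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' lines captured above.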
00:13:12.347 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.347 --rc genhtml_branch_coverage=1 00:13:12.347 --rc genhtml_function_coverage=1 00:13:12.347 --rc genhtml_legend=1 00:13:12.347 --rc geninfo_all_blocks=1 00:13:12.347 --rc geninfo_unexecuted_blocks=1 00:13:12.347 00:13:12.347 ' 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.347 --rc genhtml_branch_coverage=1 00:13:12.347 --rc genhtml_function_coverage=1 00:13:12.347 --rc genhtml_legend=1 00:13:12.347 --rc geninfo_all_blocks=1 00:13:12.347 --rc geninfo_unexecuted_blocks=1 00:13:12.347 00:13:12.347 ' 00:13:12.347 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:12.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.347 --rc genhtml_branch_coverage=1 00:13:12.347 --rc genhtml_function_coverage=1 00:13:12.347 --rc genhtml_legend=1 00:13:12.348 --rc geninfo_all_blocks=1 00:13:12.348 --rc geninfo_unexecuted_blocks=1 00:13:12.348 00:13:12.348 ' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:12.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.348 --rc genhtml_branch_coverage=1 00:13:12.348 --rc genhtml_function_coverage=1 00:13:12.348 --rc genhtml_legend=1 00:13:12.348 --rc geninfo_all_blocks=1 00:13:12.348 --rc geninfo_unexecuted_blocks=1 00:13:12.348 00:13:12.348 ' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.348 18:01:20 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.348 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:12.348 18:01:20 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=65de727a-be0e-449c-ae8a-a7575a3cda06 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e0b7b29a-5873-47a3-a56a-3eedab69d635 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e43f855b-9a9e-421a-aa8e-6a54ee671dc8 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.348 18:01:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.474 18:01:27 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:20.474 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:20.474 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:20.475 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:20.475 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:20.475 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:20.475 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.475 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:20.475 altname enp217s0f0np0 00:13:20.475 altname ens818f0np0 00:13:20.475 inet 192.168.100.8/24 scope global mlx_0_0 00:13:20.475 valid_lft forever preferred_lft forever 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:20.475 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.475 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:20.475 altname enp217s0f1np1 00:13:20.475 altname ens818f1np1 00:13:20.475 inet 192.168.100.9/24 scope global mlx_0_1 00:13:20.475 valid_lft forever preferred_lft forever 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.475 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:20.476 192.168.100.9' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:20.476 192.168.100.9' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:20.476 192.168.100.9' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2312948 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2312948 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2312948 ']' 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.476 18:01:27 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.476 18:01:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:20.476 [2024-12-09 18:01:27.595387] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:13:20.476 [2024-12-09 18:01:27.595439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.476 [2024-12-09 18:01:27.687801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.476 [2024-12-09 18:01:27.727215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.476 [2024-12-09 18:01:27.727255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.476 [2024-12-09 18:01:27.727265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.476 [2024-12-09 18:01:27.727273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.476 [2024-12-09 18:01:27.727280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.476 [2024-12-09 18:01:27.727888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.476 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.476 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:20.476 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.476 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.476 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:20.736 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.736 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:20.736 [2024-12-09 18:01:28.667197] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcc76a0/0xccbb90) succeed. 00:13:20.736 [2024-12-09 18:01:28.676223] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcc8b50/0xd0d230) succeed. 
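Everything the ns_masking test does to the target it drives over rpc.py against the running nvmf_tgt, as the trace below shows. Condensed from the RPC sequence in this run (the rpc shell variable is shorthand introduced here; the sizes, NQN, serial, and 192.168.100.8 listener address are taken from the log):

    # Shorthand for the in-tree RPC client used throughout this run.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MB bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

All of these calls appear verbatim in the trace that follows; -a on nvmf_create_subsystem allows any host to connect, which is what lets the later nvme connect succeed before per-host namespace masking is exercised.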
00:13:20.996 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:20.996 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:20.996 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:20.996 Malloc1 00:13:20.996 18:01:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:21.255 Malloc2 00:13:21.255 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:21.514 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:21.772 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:21.772 [2024-12-09 18:01:29.711158] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:21.772 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:21.772 18:01:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e43f855b-9a9e-421a-aa8e-6a54ee671dc8 -a 192.168.100.8 -s 4420 -i 4 00:13:22.340 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.340 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:22.340 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.340 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:22.340 18:01:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:24.245 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:24.246 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:24.246 [ 0]:0x1 00:13:24.246 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:24.246 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:24.246 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f4fb9c5d53d47b09b2e723772e73994 00:13:24.246 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f4fb9c5d53d47b09b2e723772e73994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.246 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:24.505 [ 0]:0x1 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f4fb9c5d53d47b09b2e723772e73994 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f4fb9c5d53d47b09b2e723772e73994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:24.505 [ 1]:0x2 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:24.505 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:13:25.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.073 18:01:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.332 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:25.332 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:25.332 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e43f855b-9a9e-421a-aa8e-6a54ee671dc8 -a 192.168.100.8 -s 4420 -i 4 00:13:25.591 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:25.591 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:25.591 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.591 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:25.591 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:25.591 18:01:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:28.126 [ 0]:0x2 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:28.126 [ 0]:0x1 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:28.126 18:01:35 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f4fb9c5d53d47b09b2e723772e73994 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f4fb9c5d53d47b09b2e723772e73994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.126 18:01:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:28.126 [ 1]:0x2 00:13:28.126 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:28.126 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:28.126 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:28.126 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.126 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:28.385 [ 0]:0x2 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:28.385 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.953 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:28.953 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:28.953 18:01:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e43f855b-9a9e-421a-aa8e-6a54ee671dc8 -a 192.168.100.8 -s 4420 -i 4 00:13:29.212 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:29.212 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:29.212 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.212 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:29.212 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:29.212 18:01:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.747 18:01:39 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:31.747 [ 0]:0x1 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f4fb9c5d53d47b09b2e723772e73994 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f4fb9c5d53d47b09b2e723772e73994 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:31.747 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.748 [ 1]:0x2 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:31.748 18:01:39 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:31.748 [ 0]:0x2 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:31.748 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:32.007 [2024-12-09 18:01:39.804758] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:32.007 request: 00:13:32.007 { 00:13:32.007 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.007 "nsid": 2, 00:13:32.007 "host": "nqn.2016-06.io.spdk:host1", 00:13:32.007 "method": "nvmf_ns_remove_host", 00:13:32.007 "req_id": 1 00:13:32.007 } 00:13:32.007 Got JSON-RPC error response 00:13:32.007 response: 00:13:32.007 { 00:13:32.007 "code": -32602, 00:13:32.007 "message": "Invalid parameters" 00:13:32.007 } 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:32.007 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:32.008 [ 0]:0x2 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4f047560bf6454794608196be7fd77b 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4f047560bf6454794608196be7fd77b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:32.008 18:01:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2315277 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2315277 /var/tmp/host.sock 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2315277 ']' 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:32.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
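For reference, the initiator-visible masking flow exercised above condenses to the following target-side sequence (a minimal sketch, not the test script itself; it assumes a running SPDK nvmf target with an RDMA listener on 192.168.100.8:4420, an initiator whose host NQN is nqn.2016-06.io.spdk:host1, and uses rpc.py as shorthand for the full scripts/rpc.py path shown above):

# back the namespace with a malloc bdev and create the subsystem
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
# --no-auto-visible: the namespace starts masked from every host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# unmask, then re-mask, namespace 1 for one specific host NQN
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

On the initiator, each step is verified by pairing nvme list-ns /dev/nvme0 with nvme id-ns -o json and comparing the reported NGUID against all zeroes: a masked namespace comes back as 00000000000000000000000000000000, an unmasked one with its real NGUID (e.g. 6f4fb9c5d53d47b09b2e723772e73994 above).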
00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.575 18:01:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.575 [2024-12-09 18:01:40.331458] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:13:32.575 [2024-12-09 18:01:40.331512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315277 ] 00:13:32.575 [2024-12-09 18:01:40.423601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.575 [2024-12-09 18:01:40.464334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.510 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.510 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:33.510 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.510 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.768 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 65de727a-be0e-449c-ae8a-a7575a3cda06 00:13:33.768 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:33.768 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 65DE727ABE0E449CAE8AA7575A3CDA06 -i 00:13:33.768 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e0b7b29a-5873-47a3-a56a-3eedab69d635 00:13:33.768 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:34.027 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E0B7B29A587347A3A56A3EEDAB69D635 -i 00:13:34.027 18:01:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:34.285 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:34.543 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:34.543 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:13:34.801 nvme0n1 00:13:34.801 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:34.801 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:35.060 nvme1n2 00:13:35.060 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:35.060 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:35.060 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:35.060 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:35.060 18:01:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:35.318 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:35.318 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:35.318 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:35.318 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:35.576 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 65de727a-be0e-449c-ae8a-a7575a3cda06 == \6\5\d\e\7\2\7\a\-\b\e\0\e\-\4\4\9\c\-\a\e\8\a\-\a\7\5\7\5\a\3\c\d\a\0\6 ]] 00:13:35.576 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:35.576 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:35.576 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:35.576 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e0b7b29a-5873-47a3-a56a-3eedab69d635 == \e\0\b\7\b\2\9\a\-\5\8\7\3\-\4\7\a\3\-\a\5\6\a\-\3\e\e\d\a\b\6\9\d\6\3\5 ]] 00:13:35.576 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.835 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 65de727a-be0e-449c-ae8a-a7575a3cda06 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 65DE727ABE0E449CAE8AA7575A3CDA06 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 65DE727ABE0E449CAE8AA7575A3CDA06 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:36.094 18:01:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 65DE727ABE0E449CAE8AA7575A3CDA06 00:13:36.094 [2024-12-09 18:01:44.055295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:36.094 [2024-12-09 18:01:44.055329] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:36.094 [2024-12-09 18:01:44.055340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.094 request: 00:13:36.094 { 00:13:36.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.094 "namespace": { 00:13:36.094 "bdev_name": "invalid", 00:13:36.094 "nsid": 1, 00:13:36.094 "nguid": "65DE727ABE0E449CAE8AA7575A3CDA06", 00:13:36.094 "no_auto_visible": false, 00:13:36.094 "hide_metadata": false 00:13:36.094 }, 00:13:36.094 "method": "nvmf_subsystem_add_ns", 00:13:36.094 "req_id": 1 00:13:36.094 } 00:13:36.094 Got JSON-RPC error response 00:13:36.094 response: 00:13:36.094 { 00:13:36.094 "code": -32602, 00:13:36.094 "message": "Invalid parameters" 00:13:36.094 } 00:13:36.094 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:36.094 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:36.353 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:36.353 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:36.353 
18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 65de727a-be0e-449c-ae8a-a7575a3cda06 00:13:36.353 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:36.353 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 65DE727ABE0E449CAE8AA7575A3CDA06 -i 00:13:36.353 18:01:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2315277 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2315277 ']' 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2315277 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2315277 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2315277' 00:13:38.883 killing process with pid 2315277 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2315277 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2315277 00:13:38.883 18:01:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:39.142 
18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:39.142 rmmod nvme_rdma 00:13:39.142 rmmod nvme_fabrics 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2312948 ']' 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2312948 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2312948 ']' 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2312948 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.142 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2312948 00:13:39.401 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.401 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.401 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2312948' 00:13:39.401 killing process with pid 2312948 00:13:39.401 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2312948 00:13:39.401 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2312948 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:39.661 00:13:39.661 real 0m27.363s 00:13:39.661 user 0m34.251s 00:13:39.661 sys 0m8.132s 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.661 ************************************ 00:13:39.661 END TEST nvmf_ns_masking 00:13:39.661 ************************************ 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.661 ************************************ 00:13:39.661 START TEST nvmf_nvme_cli 00:13:39.661 ************************************ 
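One helper pattern from the ns_masking run just completed is worth flagging before the nvme_cli output: negative assertions are wrapped in NOT, which inverts the exit status of the command it runs, so e.g. NOT ns_is_visible 0x1 passes exactly while namespace 1 is masked from the connected host. A simplified sketch of the idea (illustrative only; the real helper in common/autotest_common.sh also validates its argument via valid_exec_arg/type -t and special-cases high exit statuses, as the "es > 128" checks in the xtrace show):

NOT() {
    local es=0
    "$@" || es=$?                # run the wrapped command, capture its exit status
    (( es > 128 )) && return 1   # death by signal still counts as a hard failure
    (( es != 0 ))                # succeed only if the command failed
}

This is why the trace above logs es=1 on every visibility probe that is expected to fail, such as listing a namespace after nvmf_ns_remove_host.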
00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:39.661 * Looking for test storage... 00:13:39.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:39.661 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.920 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:39.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.920 --rc genhtml_branch_coverage=1 00:13:39.921 --rc genhtml_function_coverage=1 00:13:39.921 --rc genhtml_legend=1 00:13:39.921 --rc geninfo_all_blocks=1 00:13:39.921 --rc geninfo_unexecuted_blocks=1 00:13:39.921 00:13:39.921 ' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.921 --rc genhtml_branch_coverage=1 00:13:39.921 --rc genhtml_function_coverage=1 00:13:39.921 --rc genhtml_legend=1 00:13:39.921 --rc geninfo_all_blocks=1 00:13:39.921 --rc geninfo_unexecuted_blocks=1 00:13:39.921 00:13:39.921 ' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.921 --rc genhtml_branch_coverage=1 00:13:39.921 --rc genhtml_function_coverage=1 00:13:39.921 --rc genhtml_legend=1 00:13:39.921 --rc geninfo_all_blocks=1 00:13:39.921 --rc geninfo_unexecuted_blocks=1 00:13:39.921 00:13:39.921 ' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:39.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.921 --rc genhtml_branch_coverage=1 00:13:39.921 --rc genhtml_function_coverage=1 00:13:39.921 --rc genhtml_legend=1 00:13:39.921 --rc geninfo_all_blocks=1 00:13:39.921 --rc geninfo_unexecuted_blocks=1 00:13:39.921 00:13:39.921 ' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.921 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.921 18:01:47 
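The '[: : integer expression expected' complaint logged above is a real script bug captured by the run: common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) refuses to compare an empty string numerically. A defensive sketch (SOME_TEST_FLAG is a hypothetical stand-in for whichever unset variable line 33 reads):

flag=${SOME_TEST_FLAG:-}            # hypothetical flag; may expand to empty
# Broken: [ "$flag" -eq 1 ] prints "integer expression expected" when empty.
# Safe: default the expansion to 0 so the numeric test always sees a number.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
fi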
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:39.921 18:01:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
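gather_supported_nvmf_pci_devs buckets NICs into e810/x722/mlx arrays keyed by vendor:device ID before choosing which ports to drive. The script reads a prebuilt pci_bus_cache; a rough equivalent querying lspci directly (assuming `lspci -Dnmm` machine-readable output) looks like:

intel=8086 mellanox=15b3
e810=() x722=() mlx=()
# lspci -Dnmm: field 1 is the PCI address, fields 3/4 the quoted vendor/device IDs.
while read -r addr _ vendor device _; do
    case "${vendor//\"/}:${device//\"/}" in
        "$intel":1592|"$intel":159b)  e810+=("$addr") ;;
        "$intel":37d2)                x722+=("$addr") ;;
        "$mellanox":*)                mlx+=("$addr") ;;
    esac
done < <(lspci -Dnmm)
echo "mlx5 candidates: ${mlx[*]}"

Because this run sets SPDK_TEST_NVMF_NICS=mlx5, the script then narrows pci_devs to the mlx bucket, which is why only the two ConnectX ports at 0000:d9:00.x are reported found.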
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:48.046 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:48.046 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
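The backslash-heavy comparisons above ([[ 0x1015 == \0\x\1\0\1\7 ]]) are xtrace rendering, not source: the right-hand side of == inside [[ ]] is a glob pattern, so the shell prints each character escaped to show the quoted, literal match. The underlying source is just:

device=0x1015    # from 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' above
if [[ $device == "0x1017" ]]; then   # quoted RHS: literal compare, no globbing
    echo "device-specific handling would run here"
fi

Both ports here report 0x1015, so neither special case fires and the RDMA branch simply sets NVME_CONNECT='nvme connect -i 15'.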
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.046 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:48.047 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:48.047 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
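Each surviving PCI function is mapped to its network interface through sysfs: the kernel exposes the netdev name as a directory under the device's net/ subtree, and the glob plus basename strip below are lifted straight from the trace:

pci=0000:d9:00.0                                   # illustrative address from this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
echo "Found net devices under $pci: ${pci_net_devs[*]}"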
Linux ']' 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
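rdma_device_init front-loads the kernel RDMA stack so the ConnectX ports are usable as IB devices before any test traffic flows. The module list is exactly what the trace probes; condensed into a loop:

# Load the RDMA core/CM modules the nvmf-over-rdma tests rely on.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod" || echo "warning: failed to load $mod" >&2
done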
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:48.047 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.047 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:48.047 altname enp217s0f0np0 00:13:48.047 altname ens818f0np0 00:13:48.047 inet 192.168.100.8/24 scope global mlx_0_0 00:13:48.047 valid_lft forever preferred_lft forever 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:48.047 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.047 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:48.047 altname enp217s0f1np1 00:13:48.047 altname ens818f1np1 00:13:48.047 inet 192.168.100.9/24 scope global mlx_0_1 00:13:48.047 valid_lft forever preferred_lft forever 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:48.047 18:01:54 
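get_ip_address boils `ip -o -4 addr show <if>` down to a bare IPv4 address: the one-line (-o) output carries addr/prefix in field 4, awk selects it, and cut drops the prefix length. As a standalone helper:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
get_ip_address mlx_0_1    # -> 192.168.100.9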
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:13:48.047 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:48.048 192.168.100.9' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:48.048 192.168.100.9' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:48.048 192.168.100.9' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:13:48.048 18:01:54 
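With both addresses collected into the newline-separated RDMA_IP_LIST, the first and second target IPs are peeled off with head/tail exactly as traced:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "targets: $NVMF_FIRST_TARGET_IP, $NVMF_SECOND_TARGET_IP"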
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2319983 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2319983 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2319983 ']' 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.048 18:01:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 [2024-12-09 18:01:55.046382] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:13:48.048 [2024-12-09 18:01:55.046441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.048 [2024-12-09 18:01:55.138985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.048 [2024-12-09 18:01:55.180465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.048 [2024-12-09 18:01:55.180505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
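nvmfappstart launches the target in the background and waitforlisten blocks until the RPC socket answers. A minimal sketch of that startup handshake, assuming the stock /var/tmp/spdk.sock socket and rpc.py's rpc_get_methods call (the real waitforlisten is more elaborate):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &    # flags as traced above
nvmfpid=$!
# Bounded poll: the target is up once the RPC socket responds.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done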
00:13:48.048 [2024-12-09 18:01:55.180514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.048 [2024-12-09 18:01:55.180523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.048 [2024-12-09 18:01:55.180529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.048 [2024-12-09 18:01:55.182167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.048 [2024-12-09 18:01:55.182287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.048 [2024-12-09 18:01:55.182391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.048 [2024-12-09 18:01:55.182392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.048 18:01:55 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 [2024-12-09 18:01:55.960383] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1028980/0x102ce70) succeed. 00:13:48.048 [2024-12-09 18:01:55.969481] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x102a010/0x106e510) succeed. 
00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 Malloc0 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 Malloc1 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 [2024-12-09 18:01:56.183731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:48.306 18:01:56 
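Stripped of the xtrace noise, the provisioning performed across the rpc_cmd calls above is a short RPC sequence; restated as direct rpc.py invocations (flags copied verbatim from the trace):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
    -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420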
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.306 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:13:48.564 00:13:48.564 Discovery Log Number of Records 2, Generation counter 2 00:13:48.564 =====Discovery Log Entry 0====== 00:13:48.564 trtype: rdma 00:13:48.564 adrfam: ipv4 00:13:48.564 subtype: current discovery subsystem 00:13:48.564 treq: not required 00:13:48.564 portid: 0 00:13:48.564 trsvcid: 4420 00:13:48.564 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:48.564 traddr: 192.168.100.8 00:13:48.564 eflags: explicit discovery connections, duplicate discovery information 00:13:48.564 rdma_prtype: not specified 00:13:48.564 rdma_qptype: connected 00:13:48.564 rdma_cms: rdma-cm 00:13:48.564 rdma_pkey: 0x0000 00:13:48.564 =====Discovery Log Entry 1====== 00:13:48.564 trtype: rdma 00:13:48.564 adrfam: ipv4 00:13:48.564 subtype: nvme subsystem 00:13:48.564 treq: not required 00:13:48.564 portid: 0 00:13:48.564 trsvcid: 4420 00:13:48.564 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:48.564 traddr: 192.168.100.8 00:13:48.564 eflags: none 00:13:48.564 rdma_prtype: not specified 00:13:48.564 rdma_qptype: connected 00:13:48.564 rdma_cms: rdma-cm 00:13:48.564 rdma_pkey: 0x0000 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:48.564 18:01:56 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:49.497 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:49.497 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:49.497 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.497 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
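On the initiator side the test drives stock nvme-cli: discover reads back the two discovery log entries shown above, then connect attaches to cnode1 using the RDMA-specific 'nvme connect -i 15' prefix chosen earlier in the trace:

host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
      --hostid=8013ee90-59d8-e711-906e-00163566263e)
nvme discover "${host[@]}" -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 "${host[@]}" -t rdma -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1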
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:49.497 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:49.497 18:01:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:51.397 /dev/nvme0n2 ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
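waitforserial is the settle loop traced above: poll lsblk until the number of block devices carrying the subsystem serial matches expectations (two here, one per Malloc namespace). A faithful sketch:

waitforserial() {
    local serial=$1 want=${2:-1} i=0 n
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches.
        n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( n == want )) && return 0
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 2   # succeeds once /dev/nvme0n1 and n2 appear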
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:51.397 18:01:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.769 
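Teardown mirrors setup: disconnect the initiator, delete the subsystem over RPC, then unwind the kernel modules (the {1..20} loop above retries the modprobe -r in case references linger) and reap the target process. Condensed, with the retry loop elided:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-rdma nvme-fabrics   # emits the rmmod lines seen below
kill "$nvmfpid" && wait "$nvmfpid"      # nvmfpid from the startup sketch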
18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:52.769 rmmod nvme_rdma 00:13:52.769 rmmod nvme_fabrics 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2319983 ']' 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2319983 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2319983 ']' 00:13:52.769 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2319983 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2319983 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2319983' 00:13:52.770 killing process with pid 2319983 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2319983 00:13:52.770 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2319983 00:13:53.029 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:53.029 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:53.029 00:13:53.029 real 0m13.352s 00:13:53.029 user 0m24.442s 00:13:53.029 sys 0m6.401s 00:13:53.029 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:53.030 ************************************ 00:13:53.030 END TEST nvmf_nvme_cli 00:13:53.030 ************************************ 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.030 ************************************ 00:13:53.030 START TEST nvmf_auth_target 00:13:53.030 ************************************ 00:13:53.030 18:02:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:53.030 * Looking for test storage... 00:13:53.290 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.290 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.290 --rc genhtml_branch_coverage=1 00:13:53.290 --rc genhtml_function_coverage=1 00:13:53.290 --rc genhtml_legend=1 00:13:53.290 --rc geninfo_all_blocks=1 00:13:53.290 --rc geninfo_unexecuted_blocks=1 00:13:53.290 00:13:53.290 ' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.291 --rc genhtml_branch_coverage=1 00:13:53.291 --rc genhtml_function_coverage=1 00:13:53.291 --rc genhtml_legend=1 00:13:53.291 --rc geninfo_all_blocks=1 00:13:53.291 --rc geninfo_unexecuted_blocks=1 00:13:53.291 00:13:53.291 ' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.291 --rc genhtml_branch_coverage=1 00:13:53.291 --rc genhtml_function_coverage=1 00:13:53.291 --rc genhtml_legend=1 00:13:53.291 --rc geninfo_all_blocks=1 00:13:53.291 --rc geninfo_unexecuted_blocks=1 00:13:53.291 00:13:53.291 ' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.291 --rc genhtml_branch_coverage=1 00:13:53.291 --rc genhtml_function_coverage=1 00:13:53.291 --rc genhtml_legend=1 00:13:53.291 --rc geninfo_all_blocks=1 00:13:53.291 --rc geninfo_unexecuted_blocks=1 00:13:53.291 00:13:53.291 ' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.291 18:02:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:53.291 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:53.291 18:02:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:01.416 18:02:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.416 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:01.417 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.417 18:02:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:01.417 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:01.417 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:01.417 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.417 18:02:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:01.417 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.417 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:01.417 altname enp217s0f0np0 00:14:01.417 altname ens818f0np0 00:14:01.417 inet 192.168.100.8/24 scope global mlx_0_0 00:14:01.417 valid_lft forever preferred_lft forever 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:01.417 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.417 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:01.417 altname enp217s0f1np1 00:14:01.417 altname ens818f1np1 00:14:01.417 inet 192.168.100.9/24 scope global mlx_0_1 00:14:01.417 valid_lft forever preferred_lft forever 00:14:01.417 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:01.418 18:02:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:01.418 192.168.100.9' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:01.418 192.168.100.9' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:01.418 192.168.100.9' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2324318 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2324318 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2324318 ']' 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
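Before the target app is started, the two connect addresses used for the rest of the run are split out of the newline-separated RDMA_IP_LIST gathered above. Condensed, the head/tail selection traced here is:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With rdma as the transport, NVMF_TRANSPORT_OPTS also picks up --num-shared-buffers 1024 before nvmf_tgt is launched.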
00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.418 18:02:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2324532 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=65516eec58138418ca4b28f4c1108cce5345190cc70e7e03 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mqJ 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 65516eec58138418ca4b28f4c1108cce5345190cc70e7e03 0 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 65516eec58138418ca4b28f4c1108cce5345190cc70e7e03 0 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=65516eec58138418ca4b28f4c1108cce5345190cc70e7e03 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mqJ 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mqJ 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.mqJ 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:01.418 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e77bc86487fdcf3b00c7018ac18a31a3acde76997c50638e4e074a94a0de1037 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mq7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e77bc86487fdcf3b00c7018ac18a31a3acde76997c50638e4e074a94a0de1037 3 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e77bc86487fdcf3b00c7018ac18a31a3acde76997c50638e4e074a94a0de1037 3 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e77bc86487fdcf3b00c7018ac18a31a3acde76997c50638e4e074a94a0de1037 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mq7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mq7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.mq7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.678 18:02:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=caa66bf45ed9f168c202f3a679ce9e49 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fr7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key caa66bf45ed9f168c202f3a679ce9e49 1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 caa66bf45ed9f168c202f3a679ce9e49 1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=caa66bf45ed9f168c202f3a679ce9e49 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fr7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fr7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.fr7 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea78f68f0541c331b78fd6362f5928a5c93d40823cba8c81 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SA8 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ea78f68f0541c331b78fd6362f5928a5c93d40823cba8c81 2 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea78f68f0541c331b78fd6362f5928a5c93d40823cba8c81 2 00:14:01.678 18:02:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea78f68f0541c331b78fd6362f5928a5c93d40823cba8c81 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SA8 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SA8 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.SA8 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b6a3713c77c26ac6e8408e03dfaaf71a853c3d13582b092 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BTU 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b6a3713c77c26ac6e8408e03dfaaf71a853c3d13582b092 2 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b6a3713c77c26ac6e8408e03dfaaf71a853c3d13582b092 2 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b6a3713c77c26ac6e8408e03dfaaf71a853c3d13582b092 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BTU 00:14:01.678 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BTU 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.BTU 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c79806f0b4cb3370ae5007aaa5be4b1 00:14:01.679 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UOG 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7c79806f0b4cb3370ae5007aaa5be4b1 1 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7c79806f0b4cb3370ae5007aaa5be4b1 1 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c79806f0b4cb3370ae5007aaa5be4b1 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UOG 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UOG 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.UOG 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=26ba77ee50a24665bca9aa1f077b97d3bbe3d2099ffb7fc6b402bead89b7961b 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:01.939 18:02:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.TfU 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 26ba77ee50a24665bca9aa1f077b97d3bbe3d2099ffb7fc6b402bead89b7961b 3 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 26ba77ee50a24665bca9aa1f077b97d3bbe3d2099ffb7fc6b402bead89b7961b 3 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=26ba77ee50a24665bca9aa1f077b97d3bbe3d2099ffb7fc6b402bead89b7961b 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.TfU 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.TfU 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.TfU 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2324318 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2324318 ']' 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.939 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2324532 /var/tmp/host.sock 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2324532 ']' 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
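All eight DHCHAP secrets generated above follow one recipe: xxd pulls len/2 random bytes as a hex string, and an inline python step wraps that string into the DHHC-1 form seen later on the nvme connect command line. Below is a condensed sketch of gen_dhchap_key as traced here; the little-endian CRC-32 trailer and the two-digit hex digest field are inferred from the DHHC-1 strings in this log and should be read as assumptions, as should the explicit python3 invocation:

    gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex length>
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of key material
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<digest id>:<base64 of ASCII key + CRC-32 (little endian)>:
        python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Run as gen_dhchap_key null 48, this yields files like /tmp/spdk.key-null.mqJ whose one-line contents are the DHHC-1 secrets passed to nvme connect below.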
00:14:02.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.198 18:02:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.198 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.198 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:02.198 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:02.198 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.198 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mqJ 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.mqJ 00:14:02.457 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.mqJ 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.mq7 ]] 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mq7 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mq7 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mq7 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fr7 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.717 18:02:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fr7 00:14:02.717 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fr7 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.SA8 ]] 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SA8 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SA8 00:14:03.021 18:02:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SA8 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BTU 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BTU 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BTU 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.UOG ]] 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UOG 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UOG 00:14:03.279 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UOG 00:14:03.537 18:02:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:03.537 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.TfU 00:14:03.537 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.537 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.537 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.537 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.TfU 00:14:03.537 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.TfU 00:14:03.795 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:03.795 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:03.795 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.795 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.796 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:03.796 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.054 18:02:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.314 00:14:04.314 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.314 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.314 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.573 { 00:14:04.573 "cntlid": 1, 00:14:04.573 "qid": 0, 00:14:04.573 "state": "enabled", 00:14:04.573 "thread": "nvmf_tgt_poll_group_000", 00:14:04.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:04.573 "listen_address": { 00:14:04.573 "trtype": "RDMA", 00:14:04.573 "adrfam": "IPv4", 00:14:04.573 "traddr": "192.168.100.8", 00:14:04.573 "trsvcid": "4420" 00:14:04.573 }, 00:14:04.573 "peer_address": { 00:14:04.573 "trtype": "RDMA", 00:14:04.573 "adrfam": "IPv4", 00:14:04.573 "traddr": "192.168.100.8", 00:14:04.573 "trsvcid": "54541" 00:14:04.573 }, 00:14:04.573 "auth": { 00:14:04.573 "state": "completed", 00:14:04.573 "digest": "sha256", 00:14:04.573 "dhgroup": "null" 00:14:04.573 } 00:14:04.573 } 00:14:04.573 ]' 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.573 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:04.832 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:04.832 18:02:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:05.400 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.659 18:02:13 
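The trace above is one complete pass of the connect_authenticate helper: the host's allowed digests and DH groups are pinned with bdev_nvme_set_options, the host NQN is added to the subsystem with the key pair under test, a controller is attached through the host-side RPC, the negotiated auth parameters on the resulting qpair are asserted, and everything is torn down again. A condensed sketch of one such pass, using the subsystem NQN, address, and UUID-based host NQN seen in the log (the hostrpc wrapper is reconstructed from the auth.sh@31 expansions above):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # ... assert the negotiated auth parameters (see the next sketch), then tear down:
    hostrpc bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"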
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.659 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.917 00:14:05.917 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.917 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.918 18:02:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.177 { 00:14:06.177 "cntlid": 3, 00:14:06.177 "qid": 0, 00:14:06.177 "state": "enabled", 00:14:06.177 "thread": "nvmf_tgt_poll_group_000", 00:14:06.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:06.177 "listen_address": { 00:14:06.177 "trtype": "RDMA", 00:14:06.177 "adrfam": "IPv4", 00:14:06.177 "traddr": "192.168.100.8", 00:14:06.177 "trsvcid": "4420" 00:14:06.177 }, 00:14:06.177 "peer_address": { 00:14:06.177 "trtype": "RDMA", 00:14:06.177 "adrfam": "IPv4", 00:14:06.177 "traddr": "192.168.100.8", 00:14:06.177 "trsvcid": "57080" 00:14:06.177 }, 00:14:06.177 "auth": { 00:14:06.177 "state": "completed", 00:14:06.177 "digest": "sha256", 00:14:06.177 "dhgroup": "null" 00:14:06.177 } 00:14:06.177 } 00:14:06.177 ]' 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.177 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.436 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:06.436 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.436 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.436 18:02:14 
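The target-side qpair listing is the oracle for each attach: the script captures nvmf_subsystem_get_qpairs and pattern-matches the negotiated digest, DH group, and authentication state out of the JSON with jq, which is what the backslash-escaped [[ sha256 == \s\h\a\2\5\6 ]] style assertions above are doing. Roughly, reusing the hostrpc helper from the previous sketch:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]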
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.436 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.436 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:06.436 18:02:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:07.121 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.390 18:02:15 
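Between the SPDK-host passes, the same key is exercised through the kernel initiator. nvme-cli takes the secret by value in the DHHC-1:<hh>:<base64>: representation, where <hh> identifies the secret's hash transform (00 for an unhashed secret, 01/02/03 for SHA-256/384/512), and --dhchap-ctrl-secret is supplied whenever bidirectional authentication is under test. A trimmed sketch of the invocation pattern above, with the base64 key material replaced by a placeholder and $hostnqn as defined earlier:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:00:<base64 key material>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 key material>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0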
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.390 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.648 00:14:07.648 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.648 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.648 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.908 { 00:14:07.908 "cntlid": 5, 00:14:07.908 "qid": 0, 00:14:07.908 "state": "enabled", 00:14:07.908 "thread": "nvmf_tgt_poll_group_000", 00:14:07.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:07.908 "listen_address": { 00:14:07.908 "trtype": "RDMA", 00:14:07.908 "adrfam": "IPv4", 00:14:07.908 "traddr": "192.168.100.8", 00:14:07.908 "trsvcid": "4420" 00:14:07.908 }, 00:14:07.908 "peer_address": { 00:14:07.908 "trtype": "RDMA", 00:14:07.908 "adrfam": "IPv4", 00:14:07.908 "traddr": "192.168.100.8", 00:14:07.908 "trsvcid": "50656" 00:14:07.908 }, 00:14:07.908 "auth": { 00:14:07.908 "state": "completed", 00:14:07.908 "digest": "sha256", 00:14:07.908 "dhgroup": "null" 00:14:07.908 } 00:14:07.908 } 00:14:07.908 ]' 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.908 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.167 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:08.167 18:02:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.167 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.167 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.167 18:02:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.167 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:08.167 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.103 18:02:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.103 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.362 00:14:09.362 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.362 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.362 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.621 { 00:14:09.621 "cntlid": 7, 00:14:09.621 "qid": 0, 00:14:09.621 "state": "enabled", 00:14:09.621 "thread": "nvmf_tgt_poll_group_000", 00:14:09.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:09.621 "listen_address": { 00:14:09.621 "trtype": "RDMA", 00:14:09.621 "adrfam": "IPv4", 00:14:09.621 "traddr": "192.168.100.8", 00:14:09.621 "trsvcid": "4420" 00:14:09.621 }, 00:14:09.621 "peer_address": { 00:14:09.621 "trtype": "RDMA", 00:14:09.621 "adrfam": "IPv4", 00:14:09.621 "traddr": "192.168.100.8", 00:14:09.621 "trsvcid": "43222" 00:14:09.621 }, 00:14:09.621 "auth": { 00:14:09.621 "state": "completed", 00:14:09.621 "digest": "sha256", 00:14:09.621 "dhgroup": "null" 00:14:09.621 } 00:14:09.621 } 00:14:09.621 ]' 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.621 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:14:09.880 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:09.880 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.880 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.880 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.880 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.139 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:10.139 18:02:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.707 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:10.966 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:10.966 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.966 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:10.966 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:10.966 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.967 18:02:18 
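Note the asymmetry in the key3 pass above: the [[ -n '' ]] check earlier showed that no ckey3 was generated, so key3 is tested with unidirectional authentication only, and the kernel connect for it carried a single --dhchap-secret. The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace is what makes that automatic. A small illustration of the mechanism, with $keyid, $subnqn, and $hostnqn standing in for the script's positionals:

    # ${var:+word} expands to nothing when var is unset or empty, so a missing
    # controller key silently drops both arguments from the command line:
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"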
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.967 18:02:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.226 00:14:11.226 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.226 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.226 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.485 { 00:14:11.485 "cntlid": 9, 00:14:11.485 "qid": 0, 00:14:11.485 "state": "enabled", 00:14:11.485 "thread": "nvmf_tgt_poll_group_000", 00:14:11.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:11.485 "listen_address": { 00:14:11.485 "trtype": "RDMA", 00:14:11.485 "adrfam": "IPv4", 00:14:11.485 "traddr": "192.168.100.8", 00:14:11.485 "trsvcid": "4420" 00:14:11.485 }, 00:14:11.485 "peer_address": { 00:14:11.485 "trtype": "RDMA", 00:14:11.485 "adrfam": "IPv4", 00:14:11.485 "traddr": "192.168.100.8", 00:14:11.485 "trsvcid": "35661" 00:14:11.485 }, 00:14:11.485 "auth": { 00:14:11.485 "state": "completed", 00:14:11.485 "digest": "sha256", 00:14:11.485 "dhgroup": "ffdhe2048" 00:14:11.485 } 00:14:11.485 } 00:14:11.485 ]' 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.485 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.743 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:11.743 18:02:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:12.310 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.569 18:02:20 
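The run has now moved from the null DH group to ffdhe2048. With dhgroup null the DH-HMAC-CHAP exchange is a plain challenge/response using the shared secret; the ffdheN groups add an ephemeral finite-field Diffie-Hellman exchange (the RFC 7919 groups, with an N-bit modulus) on top of it. The test simply replays the identical key matrix under each group. A sketch of the outer shape, limited to the groups observed so far in this trace (the script's actual list may extend to the larger RFC 7919 groups):

    for dhgroup in null ffdhe2048 ffdhe3072; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # ... one connect_authenticate pass per key index, exactly as above ...
    done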
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.569 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.827 00:14:12.827 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.827 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.827 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.085 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.085 18:02:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.085 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.085 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.085 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.085 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.085 { 00:14:13.085 "cntlid": 11, 00:14:13.085 "qid": 0, 00:14:13.085 "state": "enabled", 00:14:13.085 "thread": "nvmf_tgt_poll_group_000", 00:14:13.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:13.085 "listen_address": { 00:14:13.085 "trtype": "RDMA", 00:14:13.085 "adrfam": "IPv4", 00:14:13.085 "traddr": "192.168.100.8", 00:14:13.085 "trsvcid": "4420" 00:14:13.085 }, 00:14:13.085 "peer_address": { 00:14:13.085 "trtype": "RDMA", 00:14:13.085 "adrfam": "IPv4", 00:14:13.085 "traddr": 
"192.168.100.8", 00:14:13.085 "trsvcid": "37522" 00:14:13.085 }, 00:14:13.085 "auth": { 00:14:13.085 "state": "completed", 00:14:13.085 "digest": "sha256", 00:14:13.085 "dhgroup": "ffdhe2048" 00:14:13.085 } 00:14:13.085 } 00:14:13.085 ]' 00:14:13.085 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.085 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.343 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.343 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.343 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.343 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.343 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.343 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.602 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:13.602 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:14.170 18:02:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:14.170 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.429 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.430 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.689 00:14:14.689 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.689 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.689 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.948 { 00:14:14.948 "cntlid": 13, 00:14:14.948 "qid": 0, 00:14:14.948 "state": "enabled", 00:14:14.948 "thread": "nvmf_tgt_poll_group_000", 00:14:14.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:14.948 "listen_address": { 00:14:14.948 
"trtype": "RDMA", 00:14:14.948 "adrfam": "IPv4", 00:14:14.948 "traddr": "192.168.100.8", 00:14:14.948 "trsvcid": "4420" 00:14:14.948 }, 00:14:14.948 "peer_address": { 00:14:14.948 "trtype": "RDMA", 00:14:14.948 "adrfam": "IPv4", 00:14:14.948 "traddr": "192.168.100.8", 00:14:14.948 "trsvcid": "33654" 00:14:14.948 }, 00:14:14.948 "auth": { 00:14:14.948 "state": "completed", 00:14:14.948 "digest": "sha256", 00:14:14.948 "dhgroup": "ffdhe2048" 00:14:14.948 } 00:14:14.948 } 00:14:14.948 ]' 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.948 18:02:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.207 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:15.207 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:15.775 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:16.034 18:02:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.293 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.552 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.552 { 00:14:16.552 "cntlid": 15, 00:14:16.552 "qid": 0, 00:14:16.552 "state": "enabled", 
00:14:16.552 "thread": "nvmf_tgt_poll_group_000", 00:14:16.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:16.552 "listen_address": { 00:14:16.552 "trtype": "RDMA", 00:14:16.552 "adrfam": "IPv4", 00:14:16.552 "traddr": "192.168.100.8", 00:14:16.552 "trsvcid": "4420" 00:14:16.552 }, 00:14:16.552 "peer_address": { 00:14:16.552 "trtype": "RDMA", 00:14:16.552 "adrfam": "IPv4", 00:14:16.552 "traddr": "192.168.100.8", 00:14:16.552 "trsvcid": "37782" 00:14:16.552 }, 00:14:16.552 "auth": { 00:14:16.552 "state": "completed", 00:14:16.552 "digest": "sha256", 00:14:16.552 "dhgroup": "ffdhe2048" 00:14:16.552 } 00:14:16.552 } 00:14:16.552 ]' 00:14:16.552 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.811 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.070 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:17.070 18:02:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:17.635 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.636 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.895 18:02:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.154 00:14:18.154 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.154 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.154 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.413 18:02:26 
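The auth.sh@118 through auth.sh@123 line tags repeated throughout the trace correspond to a three-level loop in target/auth.sh: digests on the outside, DH groups next, then the four key indices, with the host's options re-pinned and one connect_authenticate pass executed per combination. Schematically (the loop bodies are elided; variable names follow the expansions shown in the log):

    for digest in "${digests[@]}"; do                          # auth.sh@118
      for dhgroup in "${dhgroups[@]}"; do                      # auth.sh@119
        for keyid in "${!keys[@]}"; do                         # auth.sh@120
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"                     # auth.sh@121
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@123
        done
      done
    done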
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.413 { 00:14:18.413 "cntlid": 17, 00:14:18.413 "qid": 0, 00:14:18.413 "state": "enabled", 00:14:18.413 "thread": "nvmf_tgt_poll_group_000", 00:14:18.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:18.413 "listen_address": { 00:14:18.413 "trtype": "RDMA", 00:14:18.413 "adrfam": "IPv4", 00:14:18.413 "traddr": "192.168.100.8", 00:14:18.413 "trsvcid": "4420" 00:14:18.413 }, 00:14:18.413 "peer_address": { 00:14:18.413 "trtype": "RDMA", 00:14:18.413 "adrfam": "IPv4", 00:14:18.413 "traddr": "192.168.100.8", 00:14:18.413 "trsvcid": "44409" 00:14:18.413 }, 00:14:18.413 "auth": { 00:14:18.413 "state": "completed", 00:14:18.413 "digest": "sha256", 00:14:18.413 "dhgroup": "ffdhe3072" 00:14:18.413 } 00:14:18.413 } 00:14:18.413 ]' 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.413 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.672 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.672 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.672 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.672 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:18.673 18:02:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.613 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.873 00:14:19.873 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.873 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.873 18:02:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.132 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.133 18:02:28 
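Before inspecting any qpairs, the test confirms the attach actually produced a controller; the name check in the trace reduces to:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Expect exactly the controller created by the attach above
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]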
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.133 { 00:14:20.133 "cntlid": 19, 00:14:20.133 "qid": 0, 00:14:20.133 "state": "enabled", 00:14:20.133 "thread": "nvmf_tgt_poll_group_000", 00:14:20.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:20.133 "listen_address": { 00:14:20.133 "trtype": "RDMA", 00:14:20.133 "adrfam": "IPv4", 00:14:20.133 "traddr": "192.168.100.8", 00:14:20.133 "trsvcid": "4420" 00:14:20.133 }, 00:14:20.133 "peer_address": { 00:14:20.133 "trtype": "RDMA", 00:14:20.133 "adrfam": "IPv4", 00:14:20.133 "traddr": "192.168.100.8", 00:14:20.133 "trsvcid": "56457" 00:14:20.133 }, 00:14:20.133 "auth": { 00:14:20.133 "state": "completed", 00:14:20.133 "digest": "sha256", 00:14:20.133 "dhgroup": "ffdhe3072" 00:14:20.133 } 00:14:20.133 } 00:14:20.133 ]' 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.133 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.392 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.392 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.392 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.392 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:20.392 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:21.329 18:02:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
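The qpair listing is where authentication is actually verified: for each qpair the target reports which digest and DH group the DH-HMAC-CHAP exchange used and whether it completed. The three jq probes in the trace amount to the following (ffdhe3072 being the group under test in this pass):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]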
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.329 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.588 00:14:21.588 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.588 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.588 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.846 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.846 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.846 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.846 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.846 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.846 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.846 { 00:14:21.846 "cntlid": 21, 00:14:21.846 "qid": 0, 00:14:21.846 "state": "enabled", 00:14:21.846 "thread": "nvmf_tgt_poll_group_000", 00:14:21.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:21.846 "listen_address": { 00:14:21.846 "trtype": "RDMA", 00:14:21.846 "adrfam": "IPv4", 00:14:21.846 "traddr": "192.168.100.8", 00:14:21.847 "trsvcid": "4420" 00:14:21.847 }, 00:14:21.847 "peer_address": { 00:14:21.847 "trtype": "RDMA", 00:14:21.847 "adrfam": "IPv4", 00:14:21.847 "traddr": "192.168.100.8", 00:14:21.847 "trsvcid": "37547" 00:14:21.847 }, 00:14:21.847 "auth": { 00:14:21.847 "state": "completed", 00:14:21.847 "digest": "sha256", 00:14:21.847 "dhgroup": "ffdhe3072" 00:14:21.847 } 00:14:21.847 } 00:14:21.847 ]' 00:14:21.847 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.847 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.847 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.105 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.105 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.105 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.105 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.105 18:02:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.364 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:22.364 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:22.931 18:02:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.190 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.448 00:14:23.448 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.448 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.448 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.707 { 00:14:23.707 "cntlid": 23, 00:14:23.707 "qid": 0, 00:14:23.707 "state": "enabled", 00:14:23.707 "thread": "nvmf_tgt_poll_group_000", 00:14:23.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:23.707 "listen_address": { 00:14:23.707 "trtype": "RDMA", 00:14:23.707 "adrfam": "IPv4", 00:14:23.707 "traddr": "192.168.100.8", 00:14:23.707 "trsvcid": "4420" 00:14:23.707 }, 00:14:23.707 "peer_address": { 00:14:23.707 "trtype": "RDMA", 00:14:23.707 "adrfam": "IPv4", 00:14:23.707 "traddr": "192.168.100.8", 00:14:23.707 "trsvcid": "34617" 00:14:23.707 }, 00:14:23.707 "auth": { 00:14:23.707 "state": "completed", 00:14:23.707 "digest": "sha256", 00:14:23.707 "dhgroup": "ffdhe3072" 00:14:23.707 } 00:14:23.707 } 00:14:23.707 ]' 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.707 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.966 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:23.966 18:02:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:24.533 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
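key3 is the unidirectional case: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion seen in the trace adds nothing, and only the host is challenged. The same asymmetry shows up in the kernel-initiator check just traced, which passes a single --dhchap-secret and no --dhchap-ctrl-secret. The secret is shortened below; the full DHHC-1:03: value appears in the trace.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  # Host-only (unidirectional) DH-HMAC-CHAP: no --dhchap-ctrl-secret given
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret 'DHHC-1:03:MjZiYTc3...YkB9NzQ=:'
  nvme disconnect -n "$subnqn"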
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.792 18:02:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.051 00:14:25.051 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.051 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.051 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.310 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.310 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.310 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.310 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.310 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.310 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.310 { 00:14:25.310 "cntlid": 25, 00:14:25.310 "qid": 0, 00:14:25.310 "state": "enabled", 00:14:25.310 "thread": "nvmf_tgt_poll_group_000", 00:14:25.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:25.310 "listen_address": { 00:14:25.310 "trtype": "RDMA", 00:14:25.310 "adrfam": "IPv4", 00:14:25.310 "traddr": "192.168.100.8", 00:14:25.310 "trsvcid": "4420" 00:14:25.310 }, 00:14:25.310 "peer_address": { 00:14:25.310 "trtype": "RDMA", 00:14:25.310 "adrfam": "IPv4", 00:14:25.310 "traddr": "192.168.100.8", 00:14:25.311 "trsvcid": "33268" 00:14:25.311 }, 00:14:25.311 "auth": { 00:14:25.311 "state": "completed", 00:14:25.311 "digest": "sha256", 00:14:25.311 "dhgroup": "ffdhe4096" 00:14:25.311 } 00:14:25.311 } 00:14:25.311 ]' 00:14:25.311 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.311 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.311 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.569 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.569 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.569 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.569 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.569 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.828 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:25.828 18:02:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.395 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:26.654 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.913 00:14:26.913 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.913 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.913 18:02:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.171 { 00:14:27.171 "cntlid": 27, 00:14:27.171 "qid": 0, 00:14:27.171 "state": "enabled", 00:14:27.171 "thread": "nvmf_tgt_poll_group_000", 00:14:27.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:27.171 "listen_address": { 00:14:27.171 "trtype": "RDMA", 00:14:27.171 "adrfam": "IPv4", 00:14:27.171 "traddr": "192.168.100.8", 00:14:27.171 "trsvcid": "4420" 00:14:27.171 }, 00:14:27.171 "peer_address": { 00:14:27.171 "trtype": "RDMA", 00:14:27.171 "adrfam": "IPv4", 00:14:27.171 "traddr": "192.168.100.8", 00:14:27.171 "trsvcid": "48630" 00:14:27.171 }, 00:14:27.171 "auth": { 00:14:27.171 "state": "completed", 00:14:27.171 "digest": "sha256", 00:14:27.171 "dhgroup": "ffdhe4096" 00:14:27.171 } 00:14:27.171 } 00:14:27.171 ]' 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.171 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.172 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.172 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.172 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.172 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.172 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.429 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- 
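Between passes everything is torn down so the next digest/DH-group/key combination starts clean: the bdev-layer controller is detached, the kernel-initiator connect/disconnect runs its check, and finally the host entry is removed from the subsystem. The teardown commands, exactly as traced:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # drop the bdev-layer controller
  nvme disconnect -n "$subnqn"                                     # drop the kernel-initiator session
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"           # de-authorize the host on the target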
# nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:27.429 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:27.995 18:02:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.254 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.513 18:02:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.513 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.771 00:14:28.771 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.771 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.771 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.029 { 00:14:29.029 "cntlid": 29, 00:14:29.029 "qid": 0, 00:14:29.029 "state": "enabled", 00:14:29.029 "thread": "nvmf_tgt_poll_group_000", 00:14:29.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:29.029 "listen_address": { 00:14:29.029 "trtype": "RDMA", 00:14:29.029 "adrfam": "IPv4", 00:14:29.029 "traddr": "192.168.100.8", 00:14:29.029 "trsvcid": "4420" 00:14:29.029 }, 00:14:29.029 "peer_address": { 00:14:29.029 "trtype": "RDMA", 00:14:29.029 "adrfam": "IPv4", 00:14:29.029 "traddr": "192.168.100.8", 00:14:29.029 "trsvcid": "33398" 00:14:29.029 }, 00:14:29.029 "auth": { 00:14:29.029 "state": "completed", 00:14:29.029 "digest": "sha256", 00:14:29.029 "dhgroup": "ffdhe4096" 00:14:29.029 } 00:14:29.029 } 00:14:29.029 ]' 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.029 18:02:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.029 18:02:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.288 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:29.288 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:29.855 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.113 18:02:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.113 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.371 00:14:30.371 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.371 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.371 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.629 { 00:14:30.629 "cntlid": 31, 00:14:30.629 "qid": 0, 00:14:30.629 "state": "enabled", 00:14:30.629 "thread": "nvmf_tgt_poll_group_000", 00:14:30.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:30.629 "listen_address": { 00:14:30.629 "trtype": "RDMA", 00:14:30.629 "adrfam": "IPv4", 00:14:30.629 "traddr": "192.168.100.8", 00:14:30.629 "trsvcid": "4420" 00:14:30.629 }, 00:14:30.629 "peer_address": { 00:14:30.629 "trtype": "RDMA", 00:14:30.629 "adrfam": "IPv4", 00:14:30.629 "traddr": "192.168.100.8", 00:14:30.629 "trsvcid": "57938" 00:14:30.629 }, 00:14:30.629 "auth": { 00:14:30.629 "state": "completed", 00:14:30.629 "digest": "sha256", 00:14:30.629 "dhgroup": "ffdhe4096" 00:14:30.629 } 00:14:30.629 } 00:14:30.629 ]' 00:14:30.629 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.630 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.630 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:30.968 18:02:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:31.535 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.823 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
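A note on the secrets themselves: the DHHC-1:NN: prefix records how the base secret was transformed (00 = unhashed, 01/02/03 = HMAC-SHA-256/-384/-512), which is why both 00- and 03-class keys appear in this run. Recent nvme-cli builds can generate compatible material; treat the exact option spelling as an assumption to check against your installed version.

  # Hypothetical usage; verify against `nvme gen-dhchap-key --help` first
  nvme gen-dhchap-key --hmac 3   # prints a DHHC-1:03:<base64>: secret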
00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.082 18:02:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.341 00:14:32.341 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.341 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.341 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.600 { 00:14:32.600 "cntlid": 33, 00:14:32.600 "qid": 0, 00:14:32.600 "state": "enabled", 00:14:32.600 "thread": "nvmf_tgt_poll_group_000", 00:14:32.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:32.600 "listen_address": { 00:14:32.600 "trtype": "RDMA", 00:14:32.600 "adrfam": "IPv4", 00:14:32.600 "traddr": "192.168.100.8", 00:14:32.600 "trsvcid": "4420" 00:14:32.600 }, 00:14:32.600 "peer_address": { 00:14:32.600 "trtype": "RDMA", 00:14:32.600 "adrfam": "IPv4", 00:14:32.600 "traddr": "192.168.100.8", 00:14:32.600 "trsvcid": "48563" 00:14:32.600 }, 00:14:32.600 "auth": { 00:14:32.600 "state": "completed", 00:14:32.600 "digest": "sha256", 00:14:32.600 "dhgroup": "ffdhe6144" 00:14:32.600 } 00:14:32.600 } 00:14:32.600 ]' 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 
]] 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.600 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.859 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:32.859 18:02:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:33.427 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.686 18:02:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.686 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.944 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.944 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.944 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.944 18:02:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.203 00:14:34.203 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.203 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.203 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.462 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.462 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.462 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.462 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.462 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.462 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.462 { 00:14:34.462 "cntlid": 35, 00:14:34.462 "qid": 0, 00:14:34.462 "state": "enabled", 00:14:34.462 "thread": "nvmf_tgt_poll_group_000", 00:14:34.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:34.462 "listen_address": { 00:14:34.462 "trtype": "RDMA", 00:14:34.462 "adrfam": "IPv4", 00:14:34.462 "traddr": "192.168.100.8", 00:14:34.462 "trsvcid": "4420" 00:14:34.462 }, 00:14:34.462 "peer_address": { 00:14:34.462 "trtype": "RDMA", 00:14:34.462 "adrfam": "IPv4", 00:14:34.463 "traddr": "192.168.100.8", 00:14:34.463 "trsvcid": "40755" 00:14:34.463 }, 00:14:34.463 "auth": { 00:14:34.463 "state": "completed", 00:14:34.463 "digest": "sha256", 00:14:34.463 "dhgroup": "ffdhe6144" 00:14:34.463 } 00:14:34.463 } 00:14:34.463 ]' 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
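(For reference, the per-key check that repeats throughout this log boils down to the sequence below. This is a minimal sketch assembled only from the RPC invocations echoed above — the rpc.py path, socket, NQNs, address, and key names are taken verbatim from this run; the named keys (key0..key3, ckey0..ckey3) are registered earlier in the run and are not shown here.)

# Pin the host to one digest/dhgroup combination for this iteration.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# Allow the host NQN on the subsystem with a DH-HMAC-CHAP key (ctrlr key optional).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach from the SPDK host side; this is what drives the authentication handshake.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm on the target that the qpair negotiated the expected parameters.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect: sha256 / ffdhe6144 / completed

# Tear down before the next key or dhgroup combination.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
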
00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.463 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.722 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:34.722 18:02:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:35.289 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:35.547 
18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.547 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.548 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.548 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.548 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.548 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.548 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.114 00:14:36.114 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.114 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.114 18:02:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.114 { 00:14:36.114 "cntlid": 37, 00:14:36.114 "qid": 0, 00:14:36.114 "state": "enabled", 00:14:36.114 "thread": "nvmf_tgt_poll_group_000", 00:14:36.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:36.114 "listen_address": { 00:14:36.114 "trtype": "RDMA", 00:14:36.114 "adrfam": "IPv4", 00:14:36.114 "traddr": "192.168.100.8", 00:14:36.114 "trsvcid": "4420" 00:14:36.114 }, 00:14:36.114 "peer_address": { 00:14:36.114 "trtype": "RDMA", 00:14:36.114 "adrfam": "IPv4", 00:14:36.114 "traddr": "192.168.100.8", 00:14:36.114 "trsvcid": "55032" 00:14:36.114 }, 00:14:36.114 "auth": { 00:14:36.114 "state": 
"completed", 00:14:36.114 "digest": "sha256", 00:14:36.114 "dhgroup": "ffdhe6144" 00:14:36.114 } 00:14:36.114 } 00:14:36.114 ]' 00:14:36.114 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.373 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.631 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:36.632 18:02:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.199 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.457 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.715 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.974 { 00:14:37.974 "cntlid": 39, 00:14:37.974 "qid": 0, 00:14:37.974 "state": "enabled", 00:14:37.974 "thread": "nvmf_tgt_poll_group_000", 00:14:37.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:37.974 "listen_address": { 00:14:37.974 "trtype": "RDMA", 00:14:37.974 "adrfam": "IPv4", 00:14:37.974 "traddr": "192.168.100.8", 00:14:37.974 "trsvcid": "4420" 00:14:37.974 }, 00:14:37.974 "peer_address": { 00:14:37.974 "trtype": "RDMA", 00:14:37.974 
"adrfam": "IPv4", 00:14:37.974 "traddr": "192.168.100.8", 00:14:37.974 "trsvcid": "53287" 00:14:37.974 }, 00:14:37.974 "auth": { 00:14:37.974 "state": "completed", 00:14:37.974 "digest": "sha256", 00:14:37.974 "dhgroup": "ffdhe6144" 00:14:37.974 } 00:14:37.974 } 00:14:37.974 ]' 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.974 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.238 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:38.238 18:02:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.238 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.238 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.238 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.496 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:38.496 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.063 18:02:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe8192 0 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.321 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.890 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.890 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.150 { 00:14:40.150 "cntlid": 41, 00:14:40.150 "qid": 0, 00:14:40.150 "state": "enabled", 00:14:40.150 "thread": "nvmf_tgt_poll_group_000", 00:14:40.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:14:40.150 "listen_address": { 00:14:40.150 "trtype": "RDMA", 00:14:40.150 "adrfam": "IPv4", 00:14:40.150 "traddr": "192.168.100.8", 00:14:40.150 "trsvcid": "4420" 00:14:40.150 }, 00:14:40.150 "peer_address": { 00:14:40.150 "trtype": "RDMA", 00:14:40.150 "adrfam": "IPv4", 00:14:40.150 "traddr": "192.168.100.8", 00:14:40.150 "trsvcid": "56436" 00:14:40.150 }, 00:14:40.150 "auth": { 00:14:40.150 "state": "completed", 00:14:40.150 "digest": "sha256", 00:14:40.150 "dhgroup": "ffdhe8192" 00:14:40.150 } 00:14:40.150 } 00:14:40.150 ]' 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.150 18:02:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.150 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.150 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.150 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.409 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:40.409 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:14:40.977 18:02:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.236 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.237 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.237 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.237 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.237 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.237 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.237 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.804 00:14:41.804 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.804 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.804 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.062 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.062 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.062 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.062 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.062 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:42.062 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.062 { 00:14:42.062 "cntlid": 43, 00:14:42.062 "qid": 0, 00:14:42.063 "state": "enabled", 00:14:42.063 "thread": "nvmf_tgt_poll_group_000", 00:14:42.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:42.063 "listen_address": { 00:14:42.063 "trtype": "RDMA", 00:14:42.063 "adrfam": "IPv4", 00:14:42.063 "traddr": "192.168.100.8", 00:14:42.063 "trsvcid": "4420" 00:14:42.063 }, 00:14:42.063 "peer_address": { 00:14:42.063 "trtype": "RDMA", 00:14:42.063 "adrfam": "IPv4", 00:14:42.063 "traddr": "192.168.100.8", 00:14:42.063 "trsvcid": "60239" 00:14:42.063 }, 00:14:42.063 "auth": { 00:14:42.063 "state": "completed", 00:14:42.063 "digest": "sha256", 00:14:42.063 "dhgroup": "ffdhe8192" 00:14:42.063 } 00:14:42.063 } 00:14:42.063 ]' 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.063 18:02:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.322 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:42.322 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:42.888 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.146 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:43.146 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.146 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.146 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.146 18:02:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.146 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.146 18:02:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.404 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.405 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.405 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.405 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.405 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.405 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.663 00:14:43.663 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.663 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.663 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.922 { 00:14:43.922 "cntlid": 45, 00:14:43.922 "qid": 0, 00:14:43.922 "state": "enabled", 00:14:43.922 "thread": "nvmf_tgt_poll_group_000", 00:14:43.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:43.922 "listen_address": { 00:14:43.922 "trtype": "RDMA", 00:14:43.922 "adrfam": "IPv4", 00:14:43.922 "traddr": "192.168.100.8", 00:14:43.922 "trsvcid": "4420" 00:14:43.922 }, 00:14:43.922 "peer_address": { 00:14:43.922 "trtype": "RDMA", 00:14:43.922 "adrfam": "IPv4", 00:14:43.922 "traddr": "192.168.100.8", 00:14:43.922 "trsvcid": "37836" 00:14:43.922 }, 00:14:43.922 "auth": { 00:14:43.922 "state": "completed", 00:14:43.922 "digest": "sha256", 00:14:43.922 "dhgroup": "ffdhe8192" 00:14:43.922 } 00:14:43.922 } 00:14:43.922 ]' 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.922 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.180 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:44.180 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.180 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.180 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.180 18:02:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.180 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:44.180 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.116 18:02:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:45.116 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.117 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.117 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.117 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:45.117 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.117 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.684 00:14:45.684 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.684 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.684 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.943 { 00:14:45.943 "cntlid": 47, 00:14:45.943 "qid": 0, 00:14:45.943 "state": "enabled", 00:14:45.943 "thread": "nvmf_tgt_poll_group_000", 00:14:45.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:45.943 "listen_address": { 00:14:45.943 "trtype": "RDMA", 00:14:45.943 "adrfam": "IPv4", 00:14:45.943 "traddr": "192.168.100.8", 00:14:45.943 "trsvcid": "4420" 00:14:45.943 }, 00:14:45.943 "peer_address": { 00:14:45.943 "trtype": "RDMA", 00:14:45.943 "adrfam": "IPv4", 00:14:45.943 "traddr": "192.168.100.8", 00:14:45.943 "trsvcid": "41317" 00:14:45.943 }, 00:14:45.943 "auth": { 00:14:45.943 "state": "completed", 00:14:45.943 "digest": "sha256", 00:14:45.943 "dhgroup": "ffdhe8192" 00:14:45.943 } 00:14:45.943 } 00:14:45.943 ]' 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.943 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.944 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.944 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.944 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.944 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.944 18:02:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.203 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:46.203 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:46.770 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:47.029 18:02:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.287 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.287 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.545 18:02:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.545 { 00:14:47.545 "cntlid": 49, 00:14:47.545 "qid": 0, 00:14:47.545 "state": "enabled", 00:14:47.545 "thread": "nvmf_tgt_poll_group_000", 00:14:47.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:47.545 "listen_address": { 00:14:47.545 "trtype": "RDMA", 00:14:47.545 "adrfam": "IPv4", 00:14:47.545 "traddr": "192.168.100.8", 00:14:47.545 "trsvcid": "4420" 00:14:47.545 }, 00:14:47.545 "peer_address": { 00:14:47.545 "trtype": "RDMA", 00:14:47.545 "adrfam": "IPv4", 00:14:47.545 "traddr": "192.168.100.8", 00:14:47.545 "trsvcid": "40579" 00:14:47.545 }, 00:14:47.545 "auth": { 00:14:47.545 "state": "completed", 00:14:47.545 "digest": "sha384", 00:14:47.545 "dhgroup": "null" 00:14:47.545 } 00:14:47.545 } 00:14:47.545 ]' 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.545 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.803 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:47.803 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.803 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.803 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.803 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.062 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:48.062 18:02:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret 
DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.629 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.888 18:02:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.146 00:14:49.146 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.146 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.146 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.405 { 00:14:49.405 "cntlid": 51, 00:14:49.405 "qid": 0, 00:14:49.405 "state": "enabled", 00:14:49.405 "thread": "nvmf_tgt_poll_group_000", 00:14:49.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:49.405 "listen_address": { 00:14:49.405 "trtype": "RDMA", 00:14:49.405 "adrfam": "IPv4", 00:14:49.405 "traddr": "192.168.100.8", 00:14:49.405 "trsvcid": "4420" 00:14:49.405 }, 00:14:49.405 "peer_address": { 00:14:49.405 "trtype": "RDMA", 00:14:49.405 "adrfam": "IPv4", 00:14:49.405 "traddr": "192.168.100.8", 00:14:49.405 "trsvcid": "35072" 00:14:49.405 }, 00:14:49.405 "auth": { 00:14:49.405 "state": "completed", 00:14:49.405 "digest": "sha384", 00:14:49.405 "dhgroup": "null" 00:14:49.405 } 00:14:49.405 } 00:14:49.405 ]' 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.405 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.664 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:49.664 18:02:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:50.230 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.488 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.747 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.005 00:14:51.005 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.006 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.006 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.006 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.006 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.006 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.006 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.264 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.264 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.264 { 00:14:51.264 "cntlid": 53, 00:14:51.264 "qid": 0, 00:14:51.264 "state": "enabled", 00:14:51.264 "thread": "nvmf_tgt_poll_group_000", 00:14:51.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:51.264 "listen_address": { 00:14:51.264 "trtype": "RDMA", 00:14:51.264 "adrfam": "IPv4", 00:14:51.264 "traddr": "192.168.100.8", 00:14:51.264 "trsvcid": "4420" 00:14:51.264 }, 00:14:51.264 "peer_address": { 00:14:51.264 "trtype": "RDMA", 00:14:51.264 "adrfam": "IPv4", 00:14:51.264 "traddr": "192.168.100.8", 00:14:51.264 "trsvcid": "47191" 00:14:51.264 }, 00:14:51.264 "auth": { 00:14:51.264 "state": "completed", 00:14:51.264 "digest": "sha384", 00:14:51.264 "dhgroup": "null" 00:14:51.264 } 00:14:51.264 } 00:14:51.264 ]' 00:14:51.264 18:02:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.264 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.523 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret 
DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:51.523 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:52.091 18:02:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.091 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:52.091 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.091 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.349 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.607 00:14:52.608 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.608 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.608 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.866 { 00:14:52.866 "cntlid": 55, 00:14:52.866 "qid": 0, 00:14:52.866 "state": "enabled", 00:14:52.866 "thread": "nvmf_tgt_poll_group_000", 00:14:52.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:52.866 "listen_address": { 00:14:52.866 "trtype": "RDMA", 00:14:52.866 "adrfam": "IPv4", 00:14:52.866 "traddr": "192.168.100.8", 00:14:52.866 "trsvcid": "4420" 00:14:52.866 }, 00:14:52.866 "peer_address": { 00:14:52.866 "trtype": "RDMA", 00:14:52.866 "adrfam": "IPv4", 00:14:52.866 "traddr": "192.168.100.8", 00:14:52.866 "trsvcid": "49451" 00:14:52.866 }, 00:14:52.866 "auth": { 00:14:52.866 "state": "completed", 00:14:52.866 "digest": "sha384", 00:14:52.866 "dhgroup": "null" 00:14:52.866 } 00:14:52.866 } 00:14:52.866 ]' 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.866 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.125 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.125 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.125 18:03:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.125 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:53.125 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:14:53.692 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.951 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:53.951 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.951 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.951 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.951 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.951 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.952 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:53.952 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.211 18:03:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.211 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.211 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.211 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.211 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.470 00:14:54.470 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.470 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.470 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.729 { 00:14:54.729 "cntlid": 57, 00:14:54.729 "qid": 0, 00:14:54.729 "state": "enabled", 00:14:54.729 "thread": "nvmf_tgt_poll_group_000", 00:14:54.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:54.729 "listen_address": { 00:14:54.729 "trtype": "RDMA", 00:14:54.729 "adrfam": "IPv4", 00:14:54.729 "traddr": "192.168.100.8", 00:14:54.729 "trsvcid": "4420" 00:14:54.729 }, 00:14:54.729 "peer_address": { 00:14:54.729 "trtype": "RDMA", 00:14:54.729 "adrfam": "IPv4", 00:14:54.729 "traddr": "192.168.100.8", 00:14:54.729 "trsvcid": "54426" 00:14:54.729 }, 00:14:54.729 "auth": { 00:14:54.729 "state": "completed", 00:14:54.729 "digest": "sha384", 00:14:54.729 "dhgroup": "ffdhe2048" 00:14:54.729 } 00:14:54.729 } 00:14:54.729 ]' 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.729 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.988 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:54.988 18:03:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:14:55.555 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.814 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.815 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.815 
18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.815 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.815 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.815 18:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.073 00:14:56.073 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.073 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.073 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.332 { 00:14:56.332 "cntlid": 59, 00:14:56.332 "qid": 0, 00:14:56.332 "state": "enabled", 00:14:56.332 "thread": "nvmf_tgt_poll_group_000", 00:14:56.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:56.332 "listen_address": { 00:14:56.332 "trtype": "RDMA", 00:14:56.332 "adrfam": "IPv4", 00:14:56.332 "traddr": "192.168.100.8", 00:14:56.332 "trsvcid": "4420" 00:14:56.332 }, 00:14:56.332 "peer_address": { 00:14:56.332 "trtype": "RDMA", 00:14:56.332 "adrfam": "IPv4", 00:14:56.332 "traddr": "192.168.100.8", 00:14:56.332 "trsvcid": "56546" 00:14:56.332 }, 00:14:56.332 "auth": { 00:14:56.332 "state": "completed", 00:14:56.332 "digest": "sha384", 00:14:56.332 "dhgroup": "ffdhe2048" 00:14:56.332 } 00:14:56.332 } 00:14:56.332 ]' 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.332 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
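The three jq probes recurring through this stretch are the actual assertion: after each attach, the test pulls the qpair list from the target RPC and checks that the negotiated digest, DH group, and authentication state match what was just configured. A condensed sketch of that step, using only the RPC method and jq paths visible in the trace (the helper name and its parameters are ours; rpc.py is assumed to sit on the default target socket):

    # Fetch the subsystem's active qpairs and assert that qpair 0 authenticated
    # with the expected parameters; any mismatch fails the [[ ]] test.
    verify_qpair_auth() {
        local digest=$1 dhgroup=$2 qpairs
        qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] &&
            [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
            [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    }

For the pass in progress here it would be called as verify_qpair_auth sha384 ffdhe2048.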
00:14:56.590 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.590 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.590 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.591 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:56.591 18:03:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 
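Each pass opens with the same pair of RPCs: register the host NQN on the target with the key pair under test, then attach a controller through the host-side RPC socket with matching keys, which is what actually drives the DH-HMAC-CHAP handshake. Stripped of the trace prefixes, the two calls for the key2 pass that starts here look like this (paths, addresses, and NQNs copied from the log):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # target side: authorize the host and bind key2/ckey2 to it
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attach over RDMA with the same keys, triggering authentication
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma \
        -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

The key3 passes (cntlid 55 and 63 in the qpair dumps above) drop --dhchap-ctrlr-key entirely: the ${ckeys[$3]:+...} expansion in the trace expands to nothing when no controller key is defined, so those passes exercise one-way rather than bidirectional authentication.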
00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.526 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.785 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.044 { 00:14:58.044 "cntlid": 61, 00:14:58.044 "qid": 0, 00:14:58.044 "state": "enabled", 00:14:58.044 "thread": "nvmf_tgt_poll_group_000", 00:14:58.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:58.044 "listen_address": { 00:14:58.044 "trtype": "RDMA", 00:14:58.044 "adrfam": "IPv4", 00:14:58.044 "traddr": "192.168.100.8", 00:14:58.044 "trsvcid": "4420" 00:14:58.044 }, 00:14:58.044 "peer_address": { 00:14:58.044 "trtype": "RDMA", 00:14:58.044 "adrfam": "IPv4", 00:14:58.044 "traddr": "192.168.100.8", 00:14:58.044 "trsvcid": "40500" 00:14:58.044 }, 00:14:58.044 "auth": { 00:14:58.044 "state": "completed", 00:14:58.044 "digest": "sha384", 00:14:58.044 "dhgroup": "ffdhe2048" 00:14:58.044 } 00:14:58.044 } 00:14:58.044 ]' 00:14:58.044 18:03:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
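The same combination is then re-driven through the kernel initiator: the nvme_connect wrapper that follows hands nvme-cli the key material as inline DHHC-1 secrets instead of named keys. Reduced to its two commands (values copied from the key2 lines below, with the base64 payloads elided; in nvme-cli, -i 1 requests a single I/O queue and -l 0 a zero controller-loss timeout):

    # kernel-side leg: connect with DH-CHAP secrets, then tear the session down
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:02:<base64>' --dhchap-ctrl-secret 'DHHC-1:01:<base64>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The two-digit field after DHHC-1 describes how the base64 secret is stored (00 untransformed, 01/02/03 hashed with SHA-256/384/512), so the 02/01 pair here is a property of the keys themselves, independent of the sha384 digest being negotiated for the session.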
00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.302 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.561 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:58.561 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:14:59.128 18:03:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.128 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.387 
18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.387 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.646 00:14:59.646 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.646 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.646 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.905 { 00:14:59.905 "cntlid": 63, 00:14:59.905 "qid": 0, 00:14:59.905 "state": "enabled", 00:14:59.905 "thread": "nvmf_tgt_poll_group_000", 00:14:59.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:59.905 "listen_address": { 00:14:59.905 "trtype": "RDMA", 00:14:59.905 "adrfam": "IPv4", 00:14:59.905 "traddr": "192.168.100.8", 00:14:59.905 "trsvcid": "4420" 00:14:59.905 }, 00:14:59.905 "peer_address": { 00:14:59.905 "trtype": "RDMA", 00:14:59.905 "adrfam": "IPv4", 00:14:59.905 "traddr": "192.168.100.8", 00:14:59.905 "trsvcid": "47111" 00:14:59.905 }, 00:14:59.905 "auth": { 00:14:59.905 "state": "completed", 00:14:59.905 "digest": "sha384", 00:14:59.905 "dhgroup": "ffdhe2048" 00:14:59.905 } 00:14:59.905 } 00:14:59.905 ]' 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.905 18:03:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.164 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:00.164 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:00.732 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.037 18:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 
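This is where the dhgroup loop advances from ffdhe2048 to ffdhe3072 while the digest stays sha384. The @118 to @123 source markers outline the driver loop; a reconstruction, with the array contents inferred from the iterations visible in this section:

    # target/auth.sh@118-123 as traced: every digest x dhgroup x key combination
    for digest in "${digests[@]}"; do          # sha384 throughout this stretch
        for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ...
            for keyid in "${!keys[@]}"; do     # key0..key3 (key3 with no ckey)
                # pin the host to a single digest/dhgroup so the handshake can
                # only succeed with the combination under test
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done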
00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.323 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.323 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.615 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.615 { 00:15:01.615 "cntlid": 65, 00:15:01.615 "qid": 0, 00:15:01.615 "state": "enabled", 00:15:01.615 "thread": "nvmf_tgt_poll_group_000", 00:15:01.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:01.616 "listen_address": { 00:15:01.616 "trtype": "RDMA", 00:15:01.616 "adrfam": "IPv4", 00:15:01.616 "traddr": "192.168.100.8", 00:15:01.616 "trsvcid": "4420" 00:15:01.616 }, 00:15:01.616 "peer_address": { 00:15:01.616 "trtype": "RDMA", 00:15:01.616 "adrfam": "IPv4", 00:15:01.616 "traddr": "192.168.100.8", 00:15:01.616 "trsvcid": "52904" 00:15:01.616 }, 00:15:01.616 "auth": { 00:15:01.616 "state": "completed", 00:15:01.616 "digest": "sha384", 00:15:01.616 "dhgroup": "ffdhe3072" 
00:15:01.616 } 00:15:01.616 } 00:15:01.616 ]' 00:15:01.616 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.616 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.616 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.875 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:01.875 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.875 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.875 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.875 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.133 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:02.133 18:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:02.699 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.958 18:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.217 00:15:03.217 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.217 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.217 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.476 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.476 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.476 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.476 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.476 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.476 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.476 { 00:15:03.476 "cntlid": 67, 00:15:03.476 "qid": 0, 00:15:03.476 "state": "enabled", 00:15:03.476 "thread": "nvmf_tgt_poll_group_000", 00:15:03.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:03.476 "listen_address": { 00:15:03.476 "trtype": "RDMA", 00:15:03.476 "adrfam": "IPv4", 00:15:03.477 "traddr": "192.168.100.8", 00:15:03.477 "trsvcid": 
"4420" 00:15:03.477 }, 00:15:03.477 "peer_address": { 00:15:03.477 "trtype": "RDMA", 00:15:03.477 "adrfam": "IPv4", 00:15:03.477 "traddr": "192.168.100.8", 00:15:03.477 "trsvcid": "55808" 00:15:03.477 }, 00:15:03.477 "auth": { 00:15:03.477 "state": "completed", 00:15:03.477 "digest": "sha384", 00:15:03.477 "dhgroup": "ffdhe3072" 00:15:03.477 } 00:15:03.477 } 00:15:03.477 ]' 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.477 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.736 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:03.736 18:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:04.303 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.562 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.821 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.821 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.821 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.821 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.821 00:15:05.080 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.080 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.080 18:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.080 { 00:15:05.080 "cntlid": 69, 00:15:05.080 "qid": 0, 00:15:05.080 "state": "enabled", 00:15:05.080 "thread": "nvmf_tgt_poll_group_000", 
00:15:05.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:05.080 "listen_address": { 00:15:05.080 "trtype": "RDMA", 00:15:05.080 "adrfam": "IPv4", 00:15:05.080 "traddr": "192.168.100.8", 00:15:05.080 "trsvcid": "4420" 00:15:05.080 }, 00:15:05.080 "peer_address": { 00:15:05.080 "trtype": "RDMA", 00:15:05.080 "adrfam": "IPv4", 00:15:05.080 "traddr": "192.168.100.8", 00:15:05.080 "trsvcid": "54985" 00:15:05.080 }, 00:15:05.080 "auth": { 00:15:05.080 "state": "completed", 00:15:05.080 "digest": "sha384", 00:15:05.080 "dhgroup": "ffdhe3072" 00:15:05.080 } 00:15:05.080 } 00:15:05.080 ]' 00:15:05.080 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.339 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.601 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:05.601 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:06.171 18:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 
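Each connect_authenticate pass above follows the same three-step shape: pin the host-side bdev_nvme options to one digest/dhgroup pair, authorize the host NQN on the subsystem with the keys under test, then attach a controller so DH-HMAC-CHAP runs during controller initialization. A condensed sketch of one pass, assuming the same two-socket layout as this run (target RPCs on the default socket, host RPCs on /var/tmp/host.sock) and keys named keyN/ckeyN registered earlier in the script:

    # One pass of the per-key cycle (sketch; variable values are illustrative).
    digest=sha384 dhgroup=ffdhe3072 keyid=2
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # 1) Restrict the host (initiator) to a single digest/dhgroup combination.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 2) Allow the host NQN on the target, binding this DH-HMAC-CHAP key pair to it.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 3) Attach from the host; authentication happens as the controller comes up.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"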
00:15:06.171 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.430 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.689 00:15:06.689 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.689 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.689 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:15:06.948 { 00:15:06.948 "cntlid": 71, 00:15:06.948 "qid": 0, 00:15:06.948 "state": "enabled", 00:15:06.948 "thread": "nvmf_tgt_poll_group_000", 00:15:06.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:06.948 "listen_address": { 00:15:06.948 "trtype": "RDMA", 00:15:06.948 "adrfam": "IPv4", 00:15:06.948 "traddr": "192.168.100.8", 00:15:06.948 "trsvcid": "4420" 00:15:06.948 }, 00:15:06.948 "peer_address": { 00:15:06.948 "trtype": "RDMA", 00:15:06.948 "adrfam": "IPv4", 00:15:06.948 "traddr": "192.168.100.8", 00:15:06.948 "trsvcid": "48921" 00:15:06.948 }, 00:15:06.948 "auth": { 00:15:06.948 "state": "completed", 00:15:06.948 "digest": "sha384", 00:15:06.948 "dhgroup": "ffdhe3072" 00:15:06.948 } 00:15:06.948 } 00:15:06.948 ]' 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.948 18:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.207 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:07.207 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:07.774 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.033 18:03:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.033 18:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.033 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.034 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.034 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.292 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.293 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.293 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.293 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.552 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.552 { 00:15:08.552 "cntlid": 73, 00:15:08.552 "qid": 0, 00:15:08.552 "state": "enabled", 00:15:08.552 "thread": "nvmf_tgt_poll_group_000", 00:15:08.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:08.552 "listen_address": { 00:15:08.552 "trtype": "RDMA", 00:15:08.552 "adrfam": "IPv4", 00:15:08.552 "traddr": "192.168.100.8", 00:15:08.552 "trsvcid": "4420" 00:15:08.552 }, 00:15:08.552 "peer_address": { 00:15:08.552 "trtype": "RDMA", 00:15:08.552 "adrfam": "IPv4", 00:15:08.552 "traddr": "192.168.100.8", 00:15:08.552 "trsvcid": "36517" 00:15:08.552 }, 00:15:08.552 "auth": { 00:15:08.552 "state": "completed", 00:15:08.552 "digest": "sha384", 00:15:08.552 "dhgroup": "ffdhe4096" 00:15:08.552 } 00:15:08.552 } 00:15:08.552 ]' 00:15:08.552 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.811 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.069 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:09.069 18:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.634 18:03:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:09.634 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.893 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.894 18:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.153 00:15:10.153 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.153 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.153 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
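Interleaved with the bdev_nvme cycles, each key is also exercised through the kernel initiator: nvme connect is handed the same secrets in their DHHC-1 wire representation (per the NVMe base specification's secret format, the two-digit field after "DHHC-1:" names the hash used to transform the stored secret, with 00 meaning untransformed), and the controller is torn down again with nvme disconnect. A sketch of that leg, assuming nvme-cli built with DH-HMAC-CHAP support; the secrets below are placeholders, not values from this run:

    # Kernel-initiator leg (sketch). Flags mirror the nvme connect records above;
    # the <...> secrets stand in for real DHHC-1 strings generated with the keys.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=8013ee90-59d8-e711-906e-00163566263e
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -l 0 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:<base64 host key>:" \
        --dhchap-ctrl-secret "DHHC-1:03:<base64 controller key>:"
    nvme disconnect -n "$subnqn"   # expect: "disconnected 1 controller(s)"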
00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.412 { 00:15:10.412 "cntlid": 75, 00:15:10.412 "qid": 0, 00:15:10.412 "state": "enabled", 00:15:10.412 "thread": "nvmf_tgt_poll_group_000", 00:15:10.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:10.412 "listen_address": { 00:15:10.412 "trtype": "RDMA", 00:15:10.412 "adrfam": "IPv4", 00:15:10.412 "traddr": "192.168.100.8", 00:15:10.412 "trsvcid": "4420" 00:15:10.412 }, 00:15:10.412 "peer_address": { 00:15:10.412 "trtype": "RDMA", 00:15:10.412 "adrfam": "IPv4", 00:15:10.412 "traddr": "192.168.100.8", 00:15:10.412 "trsvcid": "33857" 00:15:10.412 }, 00:15:10.412 "auth": { 00:15:10.412 "state": "completed", 00:15:10.412 "digest": "sha384", 00:15:10.412 "dhgroup": "ffdhe4096" 00:15:10.412 } 00:15:10.412 } 00:15:10.412 ]' 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.412 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.672 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.672 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.672 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.672 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:10.672 18:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:11.239 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:11.499 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.758 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.017 00:15:12.017 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.017 18:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.017 18:03:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.276 { 00:15:12.276 "cntlid": 77, 00:15:12.276 "qid": 0, 00:15:12.276 "state": "enabled", 00:15:12.276 "thread": "nvmf_tgt_poll_group_000", 00:15:12.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:12.276 "listen_address": { 00:15:12.276 "trtype": "RDMA", 00:15:12.276 "adrfam": "IPv4", 00:15:12.276 "traddr": "192.168.100.8", 00:15:12.276 "trsvcid": "4420" 00:15:12.276 }, 00:15:12.276 "peer_address": { 00:15:12.276 "trtype": "RDMA", 00:15:12.276 "adrfam": "IPv4", 00:15:12.276 "traddr": "192.168.100.8", 00:15:12.276 "trsvcid": "40800" 00:15:12.276 }, 00:15:12.276 "auth": { 00:15:12.276 "state": "completed", 00:15:12.276 "digest": "sha384", 00:15:12.276 "dhgroup": "ffdhe4096" 00:15:12.276 } 00:15:12.276 } 00:15:12.276 ]' 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.276 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.277 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.277 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.277 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.277 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.277 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.277 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.536 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:12.536 18:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:13.104 18:03:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:13.363 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.364 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.623 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.882 { 00:15:13.882 "cntlid": 79, 00:15:13.882 "qid": 0, 00:15:13.882 "state": "enabled", 00:15:13.882 "thread": "nvmf_tgt_poll_group_000", 00:15:13.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:13.882 "listen_address": { 00:15:13.882 "trtype": "RDMA", 00:15:13.882 "adrfam": "IPv4", 00:15:13.882 "traddr": "192.168.100.8", 00:15:13.882 "trsvcid": "4420" 00:15:13.882 }, 00:15:13.882 "peer_address": { 00:15:13.882 "trtype": "RDMA", 00:15:13.882 "adrfam": "IPv4", 00:15:13.882 "traddr": "192.168.100.8", 00:15:13.882 "trsvcid": "52693" 00:15:13.882 }, 00:15:13.882 "auth": { 00:15:13.882 "state": "completed", 00:15:13.882 "digest": "sha384", 00:15:13.882 "dhgroup": "ffdhe4096" 00:15:13.882 } 00:15:13.882 } 00:15:13.882 ]' 00:15:13.882 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.141 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.141 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.141 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.141 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.142 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.142 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.142 18:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.401 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:14.401 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.968 18:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.227 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.228 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.228 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.228 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.228 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.228 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.486
00:15:15.487 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:15.487 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:15.746 {
00:15:15.746 "cntlid": 81,
00:15:15.746 "qid": 0,
00:15:15.746 "state": "enabled",
00:15:15.746 "thread": "nvmf_tgt_poll_group_000",
00:15:15.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:15.746 "listen_address": {
00:15:15.746 "trtype": "RDMA",
00:15:15.746 "adrfam": "IPv4",
00:15:15.746 "traddr": "192.168.100.8",
00:15:15.746 "trsvcid": "4420"
00:15:15.746 },
00:15:15.746 "peer_address": {
00:15:15.746 "trtype": "RDMA",
00:15:15.746 "adrfam": "IPv4",
00:15:15.746 "traddr": "192.168.100.8",
00:15:15.746 "trsvcid": "41045"
00:15:15.746 },
00:15:15.746 "auth": {
00:15:15.746 "state": "completed",
00:15:15.746 "digest": "sha384",
00:15:15.746 "dhgroup": "ffdhe6144"
00:15:15.746 }
00:15:15.746 }
00:15:15.746 ]'
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:15.746 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:16.005 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:16.005 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:16.005 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:16.005 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:16.005 18:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:16.264 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=:
00:15:16.264 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=:
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:16.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:16.832 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.091 18:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.349
00:15:17.349 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:17.349 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:17.349 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:17.608 {
00:15:17.608 "cntlid": 83,
00:15:17.608 "qid": 0,
00:15:17.608 "state": "enabled",
00:15:17.608 "thread": "nvmf_tgt_poll_group_000",
00:15:17.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:17.608 "listen_address": {
00:15:17.608 "trtype": "RDMA",
00:15:17.608 "adrfam": "IPv4",
00:15:17.608 "traddr": "192.168.100.8",
00:15:17.608 "trsvcid": "4420"
00:15:17.608 },
00:15:17.608 "peer_address": {
00:15:17.608 "trtype": "RDMA",
00:15:17.608 "adrfam": "IPv4",
00:15:17.608 "traddr": "192.168.100.8",
00:15:17.608 "trsvcid": "60224"
00:15:17.608 },
00:15:17.608 "auth": {
00:15:17.608 "state": "completed",
00:15:17.608 "digest": "sha384",
00:15:17.608 "dhgroup": "ffdhe6144"
00:15:17.608 }
00:15:17.608 }
00:15:17.608 ]'
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:17.608 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:17.867 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:17.867 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:17.867 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:17.867 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
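The hostrpc/rpc_cmd pairing in the lines above is worth making explicit: every `hostrpc X` trace is immediately followed by the expanded `rpc.py -s /var/tmp/host.sock X` call, i.e. the same SPDK JSON-RPC client pointed at the host-side bdev_nvme application instead of the nvmf target. A minimal sketch of such a wrapper, assuming only what the trace shows (rpc.py in the job's SPDK checkout, the host app listening on /var/tmp/host.sock):

    #!/usr/bin/env bash
    # Illustrative reconstruction of the hostrpc helper seen in this trace.
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    hostrpc() {
        # host-side SPDK app (the DH-HMAC-CHAP initiator) listens on its own RPC socket
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    # e.g. the verification step repeated after each attach in the log:
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

The `[[ nvme0 == \n\v\m\e\0 ]]` lines in the trace are exactly this comparison, printed with bash's xtrace escaping of the right-hand pattern.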
00:15:17.867 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:18.126 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==:
00:15:18.126 18:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==:
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:18.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:18.694 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:18.953 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:15:18.953 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
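At this point the round for key2 has registered the key pair on the target and is about to attach from the host with the same pair. The two RPCs are the heart of each cycle: `--dhchap-key` authenticates the host to the controller and `--dhchap-ctrlr-key` makes the authentication bidirectional. Condensed from the interleaved trace (`$hostnqn` stands for the uuid NQN used throughout this log; key2/ckey2 are keyring names loaded earlier in auth.sh, outside this excerpt):

    # Target side: allow this host and bind its DH-HMAC-CHAP key pair.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach with the matching pair; auth runs during connect.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

If the handshake fails, the attach itself fails, which is why the script can get away with asserting only on the qpair's auth block afterwards.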
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:18.954 18:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.213
00:15:19.213 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:19.213 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:19.213 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:19.472 {
00:15:19.472 "cntlid": 85,
00:15:19.472 "qid": 0,
00:15:19.472 "state": "enabled",
00:15:19.472 "thread": "nvmf_tgt_poll_group_000",
00:15:19.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:19.472 "listen_address": {
00:15:19.472 "trtype": "RDMA",
00:15:19.472 "adrfam": "IPv4",
00:15:19.472 "traddr": "192.168.100.8",
00:15:19.472 "trsvcid": "4420"
00:15:19.472 },
00:15:19.472 "peer_address": {
00:15:19.472 "trtype": "RDMA",
00:15:19.472 "adrfam": "IPv4",
00:15:19.472 "traddr": "192.168.100.8",
00:15:19.472 "trsvcid": "58807"
00:15:19.472 },
00:15:19.472 "auth": {
00:15:19.472 "state": "completed",
00:15:19.472 "digest": "sha384",
00:15:19.472 "dhgroup": "ffdhe6144"
00:15:19.472 }
00:15:19.472 }
00:15:19.472 ]'
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:19.472 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:19.731 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:19.731 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:19.731 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.731 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr:
00:15:19.731 18:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr:
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
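Alongside the SPDK host application, each round also exercises the kernel initiator through nvme-cli, passing the cleartext DH-HMAC-CHAP secrets inline. A condensed form of the connect/disconnect pair just traced, with the secrets copied verbatim from the log ($hostnqn/$hostid stand for the uuid values used throughout):

    # Kernel-initiator counterpart of the bdev_nvme attach above.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==:' \
        --dhchap-ctrl-secret 'DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The second field of the DHHC-1 string identifies the secret variant per the NVMe DH-HMAC-CHAP secret representation (00 for an untransformed secret, nonzero values for hashed/longer variants), which is why the different key slots in this log carry different 00/01/02/03 prefixes; treat that reading as an annotation rather than something the log itself states.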
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:20.669 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:21.237
00:15:21.237 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:21.237 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:21.237 18:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:21.237 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:21.237 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:21.237 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.237 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.237 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.237 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:21.238 {
00:15:21.238 "cntlid": 87,
00:15:21.238 "qid": 0,
00:15:21.238 "state": "enabled",
00:15:21.238 "thread": "nvmf_tgt_poll_group_000",
00:15:21.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:21.238 "listen_address": {
00:15:21.238 "trtype": "RDMA",
00:15:21.238 "adrfam": "IPv4",
00:15:21.238 "traddr": "192.168.100.8",
00:15:21.238 "trsvcid": "4420"
00:15:21.238 },
00:15:21.238 "peer_address": {
00:15:21.238 "trtype": "RDMA",
00:15:21.238 "adrfam": "IPv4",
00:15:21.238 "traddr": "192.168.100.8",
00:15:21.238 "trsvcid": "36785"
00:15:21.238 },
00:15:21.238 "auth": {
00:15:21.238 "state": "completed",
00:15:21.238 "digest": "sha384",
00:15:21.238 "dhgroup": "ffdhe6144"
00:15:21.238 }
00:15:21.238 }
00:15:21.238 ]'
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:21.497 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:21.756 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=:
00:15:21.756 18:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=:
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:22.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:22.323 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.583 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:23.151
00:15:23.151 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:23.151 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:23.151 18:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:23.411 {
00:15:23.411 "cntlid": 89,
00:15:23.411 "qid": 0,
00:15:23.411 "state": "enabled",
00:15:23.411 "thread": "nvmf_tgt_poll_group_000",
00:15:23.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:23.411 "listen_address": {
00:15:23.411 "trtype": "RDMA",
00:15:23.411 "adrfam": "IPv4",
00:15:23.411 "traddr": "192.168.100.8",
00:15:23.411 "trsvcid": "4420"
00:15:23.411 },
00:15:23.411 "peer_address": {
00:15:23.411 "trtype": "RDMA",
00:15:23.411 "adrfam": "IPv4",
00:15:23.411 "traddr": "192.168.100.8",
00:15:23.411 "trsvcid": "48669"
00:15:23.411 },
00:15:23.411 "auth": {
00:15:23.411 "state": "completed",
00:15:23.411 "digest": "sha384",
00:15:23.411 "dhgroup": "ffdhe8192"
00:15:23.411 }
00:15:23.411 }
00:15:23.411 ]'
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
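The digest check above is the first of the three assertions that close every round; the same idiom recurs after each attach. Stripped of the xtrace noise, the verification reads (a condensed restatement of the traced commands, with the script's escaped `[[ x == \x ]]` comparisons written plainly):

    # Assert the negotiated auth parameters on the admin qpair.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]   # e.g. sha384
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. ffdhe8192
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Because bdev_nvme_set_options pinned exactly one digest and one dhgroup before the attach, any value other than the pinned pair (or a state other than completed) means negotiation went somewhere it should not have.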
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:23.411 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:23.670 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=:
00:15:23.670 18:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=:
00:15:24.238 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:24.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:24.238 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:24.238 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:24.497 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:25.065
00:15:25.065 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:25.065 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:25.065 18:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:25.325 {
00:15:25.325 "cntlid": 91,
00:15:25.325 "qid": 0,
00:15:25.325 "state": "enabled",
00:15:25.325 "thread": "nvmf_tgt_poll_group_000",
00:15:25.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:25.325 "listen_address": {
00:15:25.325 "trtype": "RDMA",
00:15:25.325 "adrfam": "IPv4",
00:15:25.325 "traddr": "192.168.100.8",
00:15:25.325 "trsvcid": "4420"
00:15:25.325 },
00:15:25.325 "peer_address": {
00:15:25.325 "trtype": "RDMA",
00:15:25.325 "adrfam": "IPv4",
00:15:25.325 "traddr": "192.168.100.8",
00:15:25.325 "trsvcid": "35710"
00:15:25.325 },
00:15:25.325 "auth": {
00:15:25.325 "state": "completed",
00:15:25.325 "digest": "sha384",
00:15:25.325 "dhgroup": "ffdhe8192"
00:15:25.325 }
00:15:25.325 }
00:15:25.325 ]'
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:25.325 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:25.584 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==:
00:15:25.584 18:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==:
00:15:26.152 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:26.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.412 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.671 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.671 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:26.671 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:26.671 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:26.930
00:15:26.930 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:26.930 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:26.930 18:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:27.189 {
00:15:27.189 "cntlid": 93,
00:15:27.189 "qid": 0,
00:15:27.189 "state": "enabled",
00:15:27.189 "thread": "nvmf_tgt_poll_group_000",
00:15:27.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:27.189 "listen_address": {
00:15:27.189 "trtype": "RDMA",
00:15:27.189 "adrfam": "IPv4",
00:15:27.189 "traddr": "192.168.100.8",
00:15:27.189 "trsvcid": "4420"
00:15:27.189 },
00:15:27.189 "peer_address": {
00:15:27.189 "trtype": "RDMA",
00:15:27.189 "adrfam": "IPv4",
00:15:27.189 "traddr": "192.168.100.8",
00:15:27.189 "trsvcid": "36662"
00:15:27.189 },
00:15:27.189 "auth": {
00:15:27.189 "state": "completed",
00:15:27.189 "digest": "sha384",
00:15:27.189 "dhgroup": "ffdhe8192"
00:15:27.189 }
00:15:27.189 }
00:15:27.189 ]'
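The dump just printed is the full per-qpair record for the admin queue: qid 0, the fixed listener (4420) versus the ephemeral peer port that changes every round (41045, 60224, 58807, 36785, 48669, 35710, 36662 so far), and the auth block the assertions read. The three checked fields can equally be pulled in one pass; a small jq sketch over the same RPC:

    # Collapse the auth tuple into one line for eyeballing or logging.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
    # expected here: "sha384 ffdhe8192 completed"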
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:27.189 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:27.447 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:27.447 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:27.447 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:27.447 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr:
00:15:27.447 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr:
00:15:28.382 18:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:28.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:28.382 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:29.001
00:15:29.001 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:29.001 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:29.001 18:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:29.270 {
00:15:29.270 "cntlid": 95,
00:15:29.270 "qid": 0,
00:15:29.270 "state": "enabled",
00:15:29.270 "thread": "nvmf_tgt_poll_group_000",
00:15:29.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:29.270 "listen_address": {
00:15:29.270 "trtype": "RDMA",
00:15:29.270 "adrfam": "IPv4",
00:15:29.270 "traddr": "192.168.100.8",
00:15:29.270 "trsvcid": "4420"
00:15:29.270 },
00:15:29.270 "peer_address": {
00:15:29.270 "trtype": "RDMA",
00:15:29.270 "adrfam": "IPv4",
00:15:29.270 "traddr": "192.168.100.8",
00:15:29.270 "trsvcid": "45173"
00:15:29.270 },
00:15:29.270 "auth": {
00:15:29.270 "state": "completed",
00:15:29.270 "digest": "sha384",
00:15:29.270 "dhgroup": "ffdhe8192"
00:15:29.270 }
00:15:29.270 }
00:15:29.270 ]'
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:29.270 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:29.529 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=:
00:15:29.529 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=:
00:15:30.096 18:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:30.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:30.355 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:30.356 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:30.614
00:15:30.614 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:30.614 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:30.614 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
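The @118/@119/@120 for-lines that surface at this transition make the overall shape of the test explicit: the excerpt is one slice of a three-deep sweep over digests, DH groups, and key slots, with the host's negotiation options re-pinned before every attach. Reassembled from the traced source lines (array contents beyond what this excerpt shows are not assumed):

    for digest in "${digests[@]}"; do        # target/auth.sh@118
      for dhgroup in "${dhgroups[@]}"; do    # target/auth.sh@119
        for keyid in "${!keys[@]}"; do       # target/auth.sh@120
          # Pin exactly one digest and one group for this round (@121),
          # then run the full attach/verify/teardown cycle (@123).
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done

The sweep has just moved from sha384/ffdhe8192 to sha512 with the null group; null here means the DH-HMAC-CHAP handshake runs without an ephemeral Diffie-Hellman exchange, i.e. challenge-response authentication alone, which the subsequent qpair dumps report as "dhgroup": "null".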
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:30.873 {
00:15:30.873 "cntlid": 97,
00:15:30.873 "qid": 0,
00:15:30.873 "state": "enabled",
00:15:30.873 "thread": "nvmf_tgt_poll_group_000",
00:15:30.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:30.873 "listen_address": {
00:15:30.873 "trtype": "RDMA",
00:15:30.873 "adrfam": "IPv4",
00:15:30.873 "traddr": "192.168.100.8",
00:15:30.873 "trsvcid": "4420"
00:15:30.873 },
00:15:30.873 "peer_address": {
00:15:30.873 "trtype": "RDMA",
00:15:30.873 "adrfam": "IPv4",
00:15:30.873 "traddr": "192.168.100.8",
00:15:30.873 "trsvcid": "42803"
00:15:30.873 },
00:15:30.873 "auth": {
00:15:30.873 "state": "completed",
00:15:30.873 "digest": "sha512",
00:15:30.873 "dhgroup": "null"
00:15:30.873 }
00:15:30.873 }
00:15:30.873 ]'
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:30.873 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:31.131 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:31.131 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:31.131 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:31.131 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:31.131 18:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:31.390 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=:
00:15:31.390 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=:
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:31.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:31.956 18:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:32.215 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:32.473
00:15:32.473 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:32.474 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:32.474 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:32.732 {
00:15:32.732 "cntlid": 99,
00:15:32.732 "qid": 0,
00:15:32.732 "state": "enabled",
00:15:32.732 "thread": "nvmf_tgt_poll_group_000",
00:15:32.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:32.732 "listen_address": {
00:15:32.732 "trtype": "RDMA",
00:15:32.732 "adrfam": "IPv4",
00:15:32.732 "traddr": "192.168.100.8",
00:15:32.732 "trsvcid": "4420"
00:15:32.732 },
00:15:32.732 "peer_address": {
00:15:32.732 "trtype": "RDMA",
00:15:32.732 "adrfam": "IPv4",
00:15:32.732 "traddr": "192.168.100.8",
00:15:32.732 "trsvcid": "43117"
00:15:32.732 },
00:15:32.732 "auth": {
00:15:32.732 "state": "completed",
00:15:32.732 "digest": "sha512",
00:15:32.732 "dhgroup": "null"
00:15:32.732 }
00:15:32.732 }
00:15:32.732 ]'
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:32.732 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:32.991 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:32.991 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==:
00:15:32.991 18:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==:
00:15:33.559 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:33.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:33.817 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.077 18:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.077 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.077 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:34.077 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:34.077 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:34.335
00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.335 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.335 { 00:15:34.335 "cntlid": 101, 00:15:34.335 "qid": 0, 00:15:34.335 "state": "enabled", 00:15:34.335 "thread": "nvmf_tgt_poll_group_000", 00:15:34.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:34.335 "listen_address": { 00:15:34.335 "trtype": "RDMA", 00:15:34.335 "adrfam": "IPv4", 00:15:34.335 "traddr": "192.168.100.8", 00:15:34.335 "trsvcid": "4420" 00:15:34.335 }, 00:15:34.335 "peer_address": { 00:15:34.335 "trtype": "RDMA", 00:15:34.335 "adrfam": "IPv4", 00:15:34.335 "traddr": "192.168.100.8", 00:15:34.335 "trsvcid": "47149" 00:15:34.335 }, 00:15:34.335 "auth": { 00:15:34.335 "state": "completed", 00:15:34.335 "digest": "sha512", 00:15:34.335 "dhgroup": "null" 00:15:34.335 } 00:15:34.335 } 00:15:34.336 ]' 00:15:34.336 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.336 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.336 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.594 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.594 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.594 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.594 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.594 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.853 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:34.853 18:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.421 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.680 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.939 00:15:35.939 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.939 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.939 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.198 { 00:15:36.198 "cntlid": 103, 00:15:36.198 "qid": 0, 00:15:36.198 "state": "enabled", 00:15:36.198 "thread": "nvmf_tgt_poll_group_000", 00:15:36.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:36.198 "listen_address": { 00:15:36.198 "trtype": "RDMA", 00:15:36.198 "adrfam": "IPv4", 00:15:36.198 "traddr": "192.168.100.8", 00:15:36.198 "trsvcid": "4420" 00:15:36.198 }, 00:15:36.198 "peer_address": { 00:15:36.198 "trtype": "RDMA", 00:15:36.198 "adrfam": "IPv4", 00:15:36.198 "traddr": "192.168.100.8", 00:15:36.198 "trsvcid": "43944" 00:15:36.198 }, 00:15:36.198 "auth": { 00:15:36.198 "state": "completed", 00:15:36.198 "digest": "sha512", 00:15:36.198 "dhgroup": "null" 00:15:36.198 } 00:15:36.198 } 00:15:36.198 ]' 00:15:36.198 18:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.198 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.198 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.198 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.198 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.199 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.199 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.199 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.457 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:36.457 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:37.025 18:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.283 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:37.283 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.284 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.543 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.543 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.543 00:15:37.543 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.543 18:03:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.543 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.802 { 00:15:37.802 "cntlid": 105, 00:15:37.802 "qid": 0, 00:15:37.802 "state": "enabled", 00:15:37.802 "thread": "nvmf_tgt_poll_group_000", 00:15:37.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:37.802 "listen_address": { 00:15:37.802 "trtype": "RDMA", 00:15:37.802 "adrfam": "IPv4", 00:15:37.802 "traddr": "192.168.100.8", 00:15:37.802 "trsvcid": "4420" 00:15:37.802 }, 00:15:37.802 "peer_address": { 00:15:37.802 "trtype": "RDMA", 00:15:37.802 "adrfam": "IPv4", 00:15:37.802 "traddr": "192.168.100.8", 00:15:37.802 "trsvcid": "33078" 00:15:37.802 }, 00:15:37.802 "auth": { 00:15:37.802 "state": "completed", 00:15:37.802 "digest": "sha512", 00:15:37.802 "dhgroup": "ffdhe2048" 00:15:37.802 } 00:15:37.802 } 00:15:37.802 ]' 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.802 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.061 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.061 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.061 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.061 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.061 18:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.319 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:38.320 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.887 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.145 18:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.404 00:15:39.404 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.404 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.404 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.663 { 00:15:39.663 "cntlid": 107, 00:15:39.663 "qid": 0, 00:15:39.663 "state": "enabled", 00:15:39.663 "thread": "nvmf_tgt_poll_group_000", 00:15:39.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:39.663 "listen_address": { 00:15:39.663 "trtype": "RDMA", 00:15:39.663 "adrfam": "IPv4", 00:15:39.663 "traddr": "192.168.100.8", 00:15:39.663 "trsvcid": "4420" 00:15:39.663 }, 00:15:39.663 "peer_address": { 00:15:39.663 "trtype": "RDMA", 00:15:39.663 "adrfam": "IPv4", 00:15:39.663 "traddr": "192.168.100.8", 00:15:39.663 "trsvcid": "53497" 00:15:39.663 }, 00:15:39.663 "auth": { 00:15:39.663 "state": "completed", 00:15:39.663 "digest": "sha512", 00:15:39.663 "dhgroup": "ffdhe2048" 00:15:39.663 } 00:15:39.663 } 00:15:39.663 ]' 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.663 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.922 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 
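For orientation: each iteration of this loop finishes by reconnecting the kernel initiator with the same key material through nvme-cli, which is the step traced next. A minimal sketch of that step, reusing the transport address, subsystem NQN, and host UUID that recur throughout this trace ($key and $ckey are placeholder variables standing in for the DHHC-1 secret and controller secret of the current key index; they are not names used by the script itself):

  # connect with bidirectional DH-HMAC-CHAP, then tear the session down again
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e \
    -l 0 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0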
00:15:39.922 18:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:40.490 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.749 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.008 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.008 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.008 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.008 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.008 00:15:41.267 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.267 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.267 18:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.267 { 00:15:41.267 "cntlid": 109, 00:15:41.267 "qid": 0, 00:15:41.267 "state": "enabled", 00:15:41.267 "thread": "nvmf_tgt_poll_group_000", 00:15:41.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:41.267 "listen_address": { 00:15:41.267 "trtype": "RDMA", 00:15:41.267 "adrfam": "IPv4", 00:15:41.267 "traddr": "192.168.100.8", 00:15:41.267 "trsvcid": "4420" 00:15:41.267 }, 00:15:41.267 "peer_address": { 00:15:41.267 "trtype": "RDMA", 00:15:41.267 "adrfam": "IPv4", 00:15:41.267 "traddr": "192.168.100.8", 00:15:41.267 "trsvcid": "48328" 00:15:41.267 }, 00:15:41.267 "auth": { 00:15:41.267 "state": "completed", 00:15:41.267 "digest": "sha512", 00:15:41.267 "dhgroup": "ffdhe2048" 00:15:41.267 } 00:15:41.267 } 00:15:41.267 ]' 00:15:41.267 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.525 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.526 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.526 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.526 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.526 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.526 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.526 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.785 18:03:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:41.785 18:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.352 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:42.610 18:03:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.610 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.869 00:15:42.869 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.869 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.869 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.128 { 00:15:43.128 "cntlid": 111, 00:15:43.128 "qid": 0, 00:15:43.128 "state": "enabled", 00:15:43.128 "thread": "nvmf_tgt_poll_group_000", 00:15:43.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:43.128 "listen_address": { 00:15:43.128 "trtype": "RDMA", 00:15:43.128 "adrfam": "IPv4", 00:15:43.128 "traddr": "192.168.100.8", 00:15:43.128 "trsvcid": "4420" 00:15:43.128 }, 00:15:43.128 "peer_address": { 00:15:43.128 "trtype": "RDMA", 00:15:43.128 "adrfam": "IPv4", 00:15:43.128 "traddr": "192.168.100.8", 00:15:43.128 "trsvcid": "34895" 00:15:43.128 }, 00:15:43.128 "auth": { 00:15:43.128 "state": "completed", 00:15:43.128 "digest": "sha512", 00:15:43.128 "dhgroup": "ffdhe2048" 00:15:43.128 } 00:15:43.128 } 00:15:43.128 ]' 00:15:43.128 18:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.128 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.387 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:43.387 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:43.954 18:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.213 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.472 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.731 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.731 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.989 { 00:15:44.989 "cntlid": 113, 00:15:44.989 "qid": 0, 00:15:44.989 "state": "enabled", 00:15:44.989 "thread": "nvmf_tgt_poll_group_000", 00:15:44.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:44.989 "listen_address": { 00:15:44.989 "trtype": "RDMA", 00:15:44.989 "adrfam": "IPv4", 00:15:44.989 "traddr": "192.168.100.8", 00:15:44.989 "trsvcid": "4420" 00:15:44.989 }, 00:15:44.989 "peer_address": { 00:15:44.989 "trtype": "RDMA", 00:15:44.989 "adrfam": "IPv4", 00:15:44.989 "traddr": "192.168.100.8", 00:15:44.989 "trsvcid": "44093" 00:15:44.989 }, 00:15:44.989 "auth": { 00:15:44.989 "state": "completed", 00:15:44.989 "digest": "sha512", 00:15:44.989 "dhgroup": "ffdhe3072" 00:15:44.989 } 00:15:44.989 } 00:15:44.989 ]' 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.989 18:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.247 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:45.247 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.815 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
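The host-registration call just above is one of the RPCs that make up the host-side half of every iteration. A condensed sketch of the repeated sequence, using the rpc.py path, socket, and NQNs from this run (note that nvmf_subsystem_add_host goes to the target's default RPC socket via rpc_cmd, while the bdev_nvme_* calls go to the host application on /var/tmp/host.sock; key1/ckey1 stand for whichever key index the loop is on):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  # restrict the initiator to the digest/dhgroup pair under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # register the host on the target with its DH-HMAC-CHAP key (plus controller key when one exists)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach a controller over RDMA, which drives the authentication handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1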
00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.074 18:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.332 00:15:46.332 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.332 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.332 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.592 { 00:15:46.592 "cntlid": 115, 00:15:46.592 "qid": 0, 00:15:46.592 "state": "enabled", 00:15:46.592 "thread": "nvmf_tgt_poll_group_000", 00:15:46.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:46.592 "listen_address": { 00:15:46.592 "trtype": "RDMA", 00:15:46.592 "adrfam": "IPv4", 00:15:46.592 "traddr": "192.168.100.8", 00:15:46.592 "trsvcid": "4420" 00:15:46.592 }, 00:15:46.592 "peer_address": { 00:15:46.592 "trtype": "RDMA", 00:15:46.592 "adrfam": "IPv4", 00:15:46.592 "traddr": "192.168.100.8", 00:15:46.592 "trsvcid": "55626" 00:15:46.592 }, 00:15:46.592 "auth": { 00:15:46.592 "state": "completed", 00:15:46.592 "digest": "sha512", 00:15:46.592 "dhgroup": "ffdhe3072" 00:15:46.592 } 00:15:46.592 } 00:15:46.592 ]' 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
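The jq probes interleaved with the bracketed comparisons here are the pass/fail criteria for each iteration: every field of the first qpair's auth object must match the expected literal (the backslash-escaped right-hand sides in the trace are plain literals, escaped so bash does not treat them as glob patterns). A compact equivalent, assuming $qpairs holds the JSON array captured above:

  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]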
00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.592 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.851 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.851 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.851 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.851 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:46.851 18:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.788 
18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.788 18:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.047 00:15:48.047 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.047 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.047 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.306 { 00:15:48.306 "cntlid": 117, 00:15:48.306 "qid": 0, 00:15:48.306 "state": "enabled", 00:15:48.306 "thread": "nvmf_tgt_poll_group_000", 00:15:48.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:48.306 "listen_address": { 00:15:48.306 "trtype": "RDMA", 00:15:48.306 "adrfam": "IPv4", 00:15:48.306 "traddr": "192.168.100.8", 00:15:48.306 "trsvcid": "4420" 00:15:48.306 }, 00:15:48.306 "peer_address": { 00:15:48.306 "trtype": "RDMA", 00:15:48.306 "adrfam": "IPv4", 00:15:48.306 "traddr": "192.168.100.8", 00:15:48.306 "trsvcid": "49060" 00:15:48.306 }, 00:15:48.306 "auth": { 00:15:48.306 "state": "completed", 00:15:48.306 "digest": "sha512", 00:15:48.306 "dhgroup": "ffdhe3072" 00:15:48.306 } 00:15:48.306 } 00:15:48.306 ]' 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.306 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:48.565 18:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:49.500 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
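Note: the pass that begins here (key3) is the one asymmetric case in the matrix. ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@68 produces no arguments, and both nvmf_subsystem_add_host and the attach below run with --dhchap-key key3 only, i.e. unidirectional (host-only) authentication. The :+ form expands to its alternate words only when the parameter is set and non-empty:

    ckeys[3]=                                        # no controller key for key3
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    echo "${#ckey[@]}"                               # 0 -> nothing appended

    ckeys[2]=some-key                                # key2 has a controller key
    ckey=(${ckeys[2]:+--dhchap-ctrlr-key "ckey2"})
    echo "${ckey[@]}"                                # --dhchap-ctrlr-key ckey2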
00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.501 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.759 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.018 { 00:15:50.018 "cntlid": 119, 00:15:50.018 "qid": 0, 00:15:50.018 "state": "enabled", 00:15:50.018 "thread": "nvmf_tgt_poll_group_000", 00:15:50.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:50.018 "listen_address": { 00:15:50.018 "trtype": "RDMA", 00:15:50.018 "adrfam": "IPv4", 00:15:50.018 "traddr": "192.168.100.8", 00:15:50.018 "trsvcid": "4420" 00:15:50.018 }, 00:15:50.018 "peer_address": { 00:15:50.018 "trtype": "RDMA", 00:15:50.018 "adrfam": "IPv4", 00:15:50.018 "traddr": "192.168.100.8", 00:15:50.018 "trsvcid": "60415" 00:15:50.018 }, 00:15:50.018 "auth": { 00:15:50.018 "state": "completed", 00:15:50.018 "digest": "sha512", 00:15:50.018 "dhgroup": "ffdhe3072" 
00:15:50.018 } 00:15:50.018 } 00:15:50.018 ]' 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.018 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.277 18:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.277 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.277 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.277 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.277 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.277 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.535 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:50.535 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:51.103 18:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.362 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.363 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.363 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.621 00:15:51.621 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.621 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.621 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.880 { 00:15:51.880 "cntlid": 121, 00:15:51.880 "qid": 0, 00:15:51.880 "state": "enabled", 00:15:51.880 "thread": "nvmf_tgt_poll_group_000", 00:15:51.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:51.880 "listen_address": { 00:15:51.880 "trtype": "RDMA", 00:15:51.880 "adrfam": "IPv4", 00:15:51.880 "traddr": "192.168.100.8", 00:15:51.880 "trsvcid": "4420" 00:15:51.880 }, 00:15:51.880 "peer_address": { 00:15:51.880 "trtype": "RDMA", 
00:15:51.880 "adrfam": "IPv4", 00:15:51.880 "traddr": "192.168.100.8", 00:15:51.880 "trsvcid": "56846" 00:15:51.880 }, 00:15:51.880 "auth": { 00:15:51.880 "state": "completed", 00:15:51.880 "digest": "sha512", 00:15:51.880 "dhgroup": "ffdhe4096" 00:15:51.880 } 00:15:51.880 } 00:15:51.880 ]' 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.880 18:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.139 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:52.139 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:52.707 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.965 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.224 18:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.483 00:15:53.483 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.483 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.483 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.742 { 00:15:53.742 "cntlid": 123, 00:15:53.742 "qid": 0, 00:15:53.742 "state": "enabled", 00:15:53.742 "thread": "nvmf_tgt_poll_group_000", 
00:15:53.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:53.742 "listen_address": { 00:15:53.742 "trtype": "RDMA", 00:15:53.742 "adrfam": "IPv4", 00:15:53.742 "traddr": "192.168.100.8", 00:15:53.742 "trsvcid": "4420" 00:15:53.742 }, 00:15:53.742 "peer_address": { 00:15:53.742 "trtype": "RDMA", 00:15:53.742 "adrfam": "IPv4", 00:15:53.742 "traddr": "192.168.100.8", 00:15:53.742 "trsvcid": "54010" 00:15:53.742 }, 00:15:53.742 "auth": { 00:15:53.742 "state": "completed", 00:15:53.742 "digest": "sha512", 00:15:53.742 "dhgroup": "ffdhe4096" 00:15:53.742 } 00:15:53.742 } 00:15:53.742 ]' 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.742 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.001 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:54.001 18:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:15:54.568 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.568 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.828 18:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.086 00:15:55.086 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.087 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.087 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
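Note: once the RPC-side checks pass, each pass detaches the bdev controller and repeats the handshake through the kernel initiator before deregistering the host, which is what the recurring nvme connect/disconnect and nvmf_subsystem_remove_host lines are. Condensed, with the secrets elided (the full DHHC-1 strings appear in the trace) and shorthands as in the first note:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Kernel-initiator leg: -q/--hostnqn, -i nr-io-queues, -l ctrl-loss-tmo.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret 'DHHC-1:02:<elided>' --dhchap-ctrl-secret 'DHHC-1:01:<elided>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"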
00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.345 { 00:15:55.345 "cntlid": 125, 00:15:55.345 "qid": 0, 00:15:55.345 "state": "enabled", 00:15:55.345 "thread": "nvmf_tgt_poll_group_000", 00:15:55.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:55.345 "listen_address": { 00:15:55.345 "trtype": "RDMA", 00:15:55.345 "adrfam": "IPv4", 00:15:55.345 "traddr": "192.168.100.8", 00:15:55.345 "trsvcid": "4420" 00:15:55.345 }, 00:15:55.345 "peer_address": { 00:15:55.345 "trtype": "RDMA", 00:15:55.345 "adrfam": "IPv4", 00:15:55.345 "traddr": "192.168.100.8", 00:15:55.345 "trsvcid": "56281" 00:15:55.345 }, 00:15:55.345 "auth": { 00:15:55.345 "state": "completed", 00:15:55.345 "digest": "sha512", 00:15:55.345 "dhgroup": "ffdhe4096" 00:15:55.345 } 00:15:55.345 } 00:15:55.345 ]' 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.345 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.604 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.604 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.604 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.604 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.605 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.605 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:55.605 18:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.540 18:04:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.540 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.799 00:15:57.058 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.058 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.058 18:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.058 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.058 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.058 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.058 18:04:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.058 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.058 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.058 { 00:15:57.058 "cntlid": 127, 00:15:57.058 "qid": 0, 00:15:57.058 "state": "enabled", 00:15:57.058 "thread": "nvmf_tgt_poll_group_000", 00:15:57.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:57.058 "listen_address": { 00:15:57.058 "trtype": "RDMA", 00:15:57.058 "adrfam": "IPv4", 00:15:57.058 "traddr": "192.168.100.8", 00:15:57.058 "trsvcid": "4420" 00:15:57.058 }, 00:15:57.058 "peer_address": { 00:15:57.058 "trtype": "RDMA", 00:15:57.058 "adrfam": "IPv4", 00:15:57.058 "traddr": "192.168.100.8", 00:15:57.058 "trsvcid": "34427" 00:15:57.058 }, 00:15:57.058 "auth": { 00:15:57.058 "state": "completed", 00:15:57.058 "digest": "sha512", 00:15:57.058 "dhgroup": "ffdhe4096" 00:15:57.058 } 00:15:57.058 } 00:15:57.058 ]' 00:15:57.058 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.317 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.586 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:57.586 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:15:58.206 18:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.206 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.464 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.723 00:15:58.723 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.723 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.723 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.982 18:04:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.982 { 00:15:58.982 "cntlid": 129, 00:15:58.982 "qid": 0, 00:15:58.982 "state": "enabled", 00:15:58.982 "thread": "nvmf_tgt_poll_group_000", 00:15:58.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:58.982 "listen_address": { 00:15:58.982 "trtype": "RDMA", 00:15:58.982 "adrfam": "IPv4", 00:15:58.982 "traddr": "192.168.100.8", 00:15:58.982 "trsvcid": "4420" 00:15:58.982 }, 00:15:58.982 "peer_address": { 00:15:58.982 "trtype": "RDMA", 00:15:58.982 "adrfam": "IPv4", 00:15:58.982 "traddr": "192.168.100.8", 00:15:58.982 "trsvcid": "45333" 00:15:58.982 }, 00:15:58.982 "auth": { 00:15:58.982 "state": "completed", 00:15:58.982 "digest": "sha512", 00:15:58.982 "dhgroup": "ffdhe6144" 00:15:58.982 } 00:15:58.982 } 00:15:58.982 ]' 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.982 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.241 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.241 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.242 18:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.242 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:15:59.242 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:16:00.177 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.178 18:04:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:00.178 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.178 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.178 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.178 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.178 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.178 18:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.178 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.745 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.745 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.004 { 00:16:01.004 "cntlid": 131, 00:16:01.004 "qid": 0, 00:16:01.004 "state": "enabled", 00:16:01.004 "thread": "nvmf_tgt_poll_group_000", 00:16:01.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:01.004 "listen_address": { 00:16:01.004 "trtype": "RDMA", 00:16:01.004 "adrfam": "IPv4", 00:16:01.004 "traddr": "192.168.100.8", 00:16:01.004 "trsvcid": "4420" 00:16:01.004 }, 00:16:01.004 "peer_address": { 00:16:01.004 "trtype": "RDMA", 00:16:01.004 "adrfam": "IPv4", 00:16:01.004 "traddr": "192.168.100.8", 00:16:01.004 "trsvcid": "33086" 00:16:01.004 }, 00:16:01.004 "auth": { 00:16:01.004 "state": "completed", 00:16:01.004 "digest": "sha512", 00:16:01.004 "dhgroup": "ffdhe6144" 00:16:01.004 } 00:16:01.004 } 00:16:01.004 ]' 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.004 18:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.263 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:16:01.263 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret 
DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:01.830 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.089 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.090 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.090 18:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.348 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.606 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.606 { 00:16:02.606 "cntlid": 133, 00:16:02.606 "qid": 0, 00:16:02.606 "state": "enabled", 00:16:02.607 "thread": "nvmf_tgt_poll_group_000", 00:16:02.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:02.607 "listen_address": { 00:16:02.607 "trtype": "RDMA", 00:16:02.607 "adrfam": "IPv4", 00:16:02.607 "traddr": "192.168.100.8", 00:16:02.607 "trsvcid": "4420" 00:16:02.607 }, 00:16:02.607 "peer_address": { 00:16:02.607 "trtype": "RDMA", 00:16:02.607 "adrfam": "IPv4", 00:16:02.607 "traddr": "192.168.100.8", 00:16:02.607 "trsvcid": "58038" 00:16:02.607 }, 00:16:02.607 "auth": { 00:16:02.607 "state": "completed", 00:16:02.607 "digest": "sha512", 00:16:02.607 "dhgroup": "ffdhe6144" 00:16:02.607 } 00:16:02.607 } 00:16:02.607 ]' 00:16:02.607 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.865 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.123 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:16:03.124 18:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.691 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.950 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.951 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.951 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.951 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.951 18:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.209 00:16:04.209 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.209 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.209 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.468 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.468 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.468 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.468 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.468 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.468 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.468 { 00:16:04.468 "cntlid": 135, 00:16:04.468 "qid": 0, 00:16:04.468 "state": "enabled", 00:16:04.469 "thread": "nvmf_tgt_poll_group_000", 00:16:04.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:04.469 "listen_address": { 00:16:04.469 "trtype": "RDMA", 00:16:04.469 "adrfam": "IPv4", 00:16:04.469 "traddr": "192.168.100.8", 00:16:04.469 "trsvcid": "4420" 00:16:04.469 }, 00:16:04.469 "peer_address": { 00:16:04.469 "trtype": "RDMA", 00:16:04.469 "adrfam": "IPv4", 00:16:04.469 "traddr": "192.168.100.8", 00:16:04.469 "trsvcid": "59326" 00:16:04.469 }, 00:16:04.469 "auth": { 00:16:04.469 "state": "completed", 00:16:04.469 "digest": "sha512", 00:16:04.469 "dhgroup": "ffdhe6144" 00:16:04.469 } 00:16:04.469 } 00:16:04.469 ]' 00:16:04.469 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.469 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.728 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.728 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.728 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.728 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.728 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.728 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.987 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 
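
[Editor's note] The trace above is one pass of auth.sh's per-key verification loop for the ffdhe6144 DH group: the host-side bdev_nvme options are pinned to the digest/dhgroup pair under test, the target re-adds the host NQN with the key under test (plus the controller key when one exists), a controller is attached so the DH-HMAC-CHAP handshake runs, and nvmf_subsystem_get_qpairs must report auth state "completed" with the expected digest and dhgroup before the controller is detached and the same connection is repeated through the kernel with nvme-cli. Below is a condensed sketch of that cycle, not the script itself: it reuses the sockets, address, and NQNs from this run, assumes keys key1/ckey1 were registered earlier in the test, and uses plain rpc.py calls in place of the script's rpc_cmd/hostrpc helpers.

    # Condensed sketch of one connect_authenticate cycle (sha512 / ffdhe6144 / key1).
    # Assumes an SPDK target listening on 192.168.100.8:4420 with keys key1/ckey1
    # already registered, and a host app serving RPC on /var/tmp/host.sock.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: only negotiate the digest/dhgroup pair under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Target side (default RPC socket): allow the host NQN with this key pair.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attaching the controller performs the DH-HMAC-CHAP handshake over RDMA.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The qpair must show a completed sha512/ffdhe6144 authentication.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .state, .digest, .dhgroup'
    # Tear down before the next key is tested.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same cycle then repeats for key2 and key3, and again below for the ffdhe8192 group.
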
00:16:04.987 18:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:05.554 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.813 18:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.380 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.380 { 00:16:06.380 "cntlid": 137, 00:16:06.380 "qid": 0, 00:16:06.380 "state": "enabled", 00:16:06.380 "thread": "nvmf_tgt_poll_group_000", 00:16:06.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:06.380 "listen_address": { 00:16:06.380 "trtype": "RDMA", 00:16:06.380 "adrfam": "IPv4", 00:16:06.380 "traddr": "192.168.100.8", 00:16:06.380 "trsvcid": "4420" 00:16:06.380 }, 00:16:06.380 "peer_address": { 00:16:06.380 "trtype": "RDMA", 00:16:06.380 "adrfam": "IPv4", 00:16:06.380 "traddr": "192.168.100.8", 00:16:06.380 "trsvcid": "56749" 00:16:06.380 }, 00:16:06.380 "auth": { 00:16:06.380 "state": "completed", 00:16:06.380 "digest": "sha512", 00:16:06.380 "dhgroup": "ffdhe8192" 00:16:06.380 } 00:16:06.380 } 00:16:06.380 ]' 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.380 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.639 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.639 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.639 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.639 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.639 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.898 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:16:06.898 18:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.464 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.723 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.724 18:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.292 00:16:08.292 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.292 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.292 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.551 { 00:16:08.551 "cntlid": 139, 00:16:08.551 "qid": 0, 00:16:08.551 "state": "enabled", 00:16:08.551 "thread": "nvmf_tgt_poll_group_000", 00:16:08.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:08.551 "listen_address": { 00:16:08.551 "trtype": "RDMA", 00:16:08.551 "adrfam": "IPv4", 00:16:08.551 "traddr": "192.168.100.8", 00:16:08.551 "trsvcid": "4420" 00:16:08.551 }, 00:16:08.551 "peer_address": { 00:16:08.551 "trtype": "RDMA", 00:16:08.551 "adrfam": "IPv4", 00:16:08.551 "traddr": "192.168.100.8", 00:16:08.551 "trsvcid": "39156" 00:16:08.551 }, 00:16:08.551 "auth": { 00:16:08.551 "state": "completed", 00:16:08.551 "digest": "sha512", 00:16:08.551 "dhgroup": "ffdhe8192" 00:16:08.551 } 00:16:08.551 } 00:16:08.551 ]' 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.551 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.810 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:16:08.810 18:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: --dhchap-ctrl-secret DHHC-1:02:ZWE3OGY2OGYwNTQxYzMzMWI3OGZkNjM2MmY1OTI4YTVjOTNkNDA4MjNjYmE4YzgxF8hHhA==: 00:16:09.378 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.637 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:09.637 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.637 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.638 18:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.204 00:16:10.204 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.204 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.204 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.462 { 00:16:10.462 "cntlid": 141, 00:16:10.462 "qid": 0, 00:16:10.462 "state": "enabled", 00:16:10.462 "thread": "nvmf_tgt_poll_group_000", 00:16:10.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:10.462 "listen_address": { 00:16:10.462 "trtype": "RDMA", 00:16:10.462 "adrfam": "IPv4", 00:16:10.462 "traddr": "192.168.100.8", 00:16:10.462 "trsvcid": "4420" 00:16:10.462 }, 00:16:10.462 "peer_address": { 00:16:10.462 "trtype": "RDMA", 00:16:10.462 "adrfam": "IPv4", 00:16:10.462 "traddr": "192.168.100.8", 00:16:10.462 "trsvcid": "40169" 00:16:10.462 }, 00:16:10.462 "auth": { 00:16:10.462 "state": "completed", 00:16:10.462 "digest": "sha512", 00:16:10.462 "dhgroup": "ffdhe8192" 00:16:10.462 } 00:16:10.462 } 00:16:10.462 ]' 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.462 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.721 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:16:10.721 18:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:01:N2M3OTgwNmYwYjRjYjMzNzBhZTUwMDdhYWE1YmU0YjFcJxYr: 00:16:11.288 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.547 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.806 18:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.064 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.323 { 00:16:12.323 "cntlid": 143, 00:16:12.323 "qid": 0, 00:16:12.323 "state": "enabled", 00:16:12.323 "thread": "nvmf_tgt_poll_group_000", 00:16:12.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:12.323 "listen_address": { 00:16:12.323 "trtype": "RDMA", 00:16:12.323 "adrfam": "IPv4", 00:16:12.323 "traddr": "192.168.100.8", 00:16:12.323 "trsvcid": "4420" 00:16:12.323 }, 00:16:12.323 "peer_address": { 00:16:12.323 "trtype": "RDMA", 00:16:12.323 "adrfam": "IPv4", 00:16:12.323 "traddr": "192.168.100.8", 00:16:12.323 "trsvcid": "39762" 00:16:12.323 }, 00:16:12.323 "auth": { 00:16:12.323 "state": "completed", 00:16:12.323 "digest": "sha512", 00:16:12.323 "dhgroup": "ffdhe8192" 00:16:12.323 } 00:16:12.323 } 00:16:12.323 ]' 00:16:12.323 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.582 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.582 18:04:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.582 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.582 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.582 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.582 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.582 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.841 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:12.841 18:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:13.409 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.668 18:04:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.668 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.669 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.669 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.669 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.669 18:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.236 00:16:14.236 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.236 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.236 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.494 { 00:16:14.494 "cntlid": 145, 00:16:14.494 "qid": 0, 00:16:14.494 "state": "enabled", 00:16:14.494 "thread": "nvmf_tgt_poll_group_000", 00:16:14.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:14.494 "listen_address": { 00:16:14.494 "trtype": "RDMA", 00:16:14.494 "adrfam": "IPv4", 00:16:14.494 "traddr": "192.168.100.8", 00:16:14.494 "trsvcid": "4420" 00:16:14.494 }, 00:16:14.494 
"peer_address": { 00:16:14.494 "trtype": "RDMA", 00:16:14.494 "adrfam": "IPv4", 00:16:14.494 "traddr": "192.168.100.8", 00:16:14.494 "trsvcid": "49734" 00:16:14.494 }, 00:16:14.494 "auth": { 00:16:14.494 "state": "completed", 00:16:14.494 "digest": "sha512", 00:16:14.494 "dhgroup": "ffdhe8192" 00:16:14.494 } 00:16:14.494 } 00:16:14.494 ]' 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.494 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.753 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:16:14.753 18:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NjU1MTZlZWM1ODEzODQxOGNhNGIyOGY0YzExMDhjY2U1MzQ1MTkwY2M3MGU3ZTAzwF5i6g==: --dhchap-ctrl-secret DHHC-1:03:ZTc3YmM4NjQ4N2ZkY2YzYjAwYzcwMThhYzE4YTMxYTNhY2RlNzY5OTdjNTA2MzhlNGUwNzRhOTRhMGRlMTAzNw8AkLI=: 00:16:15.321 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.581 18:04:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:15.581 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:15.855 request: 00:16:15.855 { 00:16:15.855 "name": "nvme0", 00:16:15.855 "trtype": "rdma", 00:16:15.855 "traddr": "192.168.100.8", 00:16:15.855 "adrfam": "ipv4", 00:16:15.855 "trsvcid": "4420", 00:16:15.855 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:15.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:15.856 "prchk_reftag": false, 00:16:15.856 "prchk_guard": false, 00:16:15.856 "hdgst": false, 00:16:15.856 "ddgst": false, 00:16:15.856 "dhchap_key": "key2", 00:16:15.856 "allow_unrecognized_csi": false, 00:16:15.856 "method": "bdev_nvme_attach_controller", 00:16:15.856 "req_id": 1 00:16:15.856 } 00:16:15.856 Got JSON-RPC error response 00:16:15.856 response: 00:16:15.856 { 00:16:15.856 "code": -5, 00:16:15.856 "message": "Input/output error" 00:16:15.856 } 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.856 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:16.114 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.114 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:16.114 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:16.114 18:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:16.373 request: 00:16:16.373 { 00:16:16.373 "name": "nvme0", 00:16:16.373 "trtype": "rdma", 00:16:16.373 "traddr": "192.168.100.8", 00:16:16.373 "adrfam": "ipv4", 00:16:16.373 "trsvcid": "4420", 00:16:16.373 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:16.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:16.373 "prchk_reftag": false, 00:16:16.373 "prchk_guard": false, 00:16:16.373 "hdgst": false, 00:16:16.373 "ddgst": false, 00:16:16.373 "dhchap_key": "key1", 00:16:16.373 "dhchap_ctrlr_key": "ckey2", 00:16:16.373 "allow_unrecognized_csi": false, 00:16:16.373 "method": "bdev_nvme_attach_controller", 00:16:16.373 "req_id": 1 00:16:16.373 } 00:16:16.373 Got JSON-RPC error response 00:16:16.373 response: 00:16:16.373 { 00:16:16.373 "code": -5, 00:16:16.373 "message": "Input/output error" 00:16:16.373 } 00:16:16.373 18:04:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.373 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.941 request: 00:16:16.941 { 00:16:16.941 "name": "nvme0", 
00:16:16.941 "trtype": "rdma", 00:16:16.941 "traddr": "192.168.100.8", 00:16:16.941 "adrfam": "ipv4", 00:16:16.941 "trsvcid": "4420", 00:16:16.941 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:16.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:16.941 "prchk_reftag": false, 00:16:16.941 "prchk_guard": false, 00:16:16.941 "hdgst": false, 00:16:16.941 "ddgst": false, 00:16:16.941 "dhchap_key": "key1", 00:16:16.941 "dhchap_ctrlr_key": "ckey1", 00:16:16.941 "allow_unrecognized_csi": false, 00:16:16.941 "method": "bdev_nvme_attach_controller", 00:16:16.941 "req_id": 1 00:16:16.941 } 00:16:16.941 Got JSON-RPC error response 00:16:16.941 response: 00:16:16.941 { 00:16:16.941 "code": -5, 00:16:16.941 "message": "Input/output error" 00:16:16.941 } 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.941 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2324318 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2324318 ']' 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2324318 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324318 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324318' 00:16:16.942 killing process with pid 2324318 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2324318 00:16:16.942 18:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2324318 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.201 18:04:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2349336 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2349336 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2349336 ']' 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.201 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.141 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.141 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:18.141 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:18.141 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:18.141 18:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.141 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.141 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:18.141 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2349336 00:16:18.141 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2349336 ']' 00:16:18.141 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.141 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.142 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:18.142 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.142 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.401 null0 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.401 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mqJ 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.mq7 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mq7 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fr7 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.SA8 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SA8 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BTU 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.UOG ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UOG 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.TfU 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.660 18:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.228 nvme0n1 00:16:19.228 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.228 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.228 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.487 { 00:16:19.487 "cntlid": 1, 00:16:19.487 "qid": 0, 00:16:19.487 "state": "enabled", 00:16:19.487 "thread": "nvmf_tgt_poll_group_000", 00:16:19.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:19.487 "listen_address": { 00:16:19.487 "trtype": "RDMA", 00:16:19.487 "adrfam": "IPv4", 00:16:19.487 "traddr": "192.168.100.8", 00:16:19.487 "trsvcid": "4420" 00:16:19.487 }, 00:16:19.487 "peer_address": { 00:16:19.487 "trtype": "RDMA", 00:16:19.487 "adrfam": "IPv4", 00:16:19.487 "traddr": "192.168.100.8", 00:16:19.487 "trsvcid": "52739" 00:16:19.487 }, 00:16:19.487 "auth": { 00:16:19.487 "state": "completed", 00:16:19.487 "digest": "sha512", 00:16:19.487 "dhgroup": "ffdhe8192" 00:16:19.487 } 00:16:19.487 } 00:16:19.487 ]' 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.487 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.746 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.746 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.746 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.746 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.746 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.004 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:20.004 18:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:20.573 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.832 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.090 request: 00:16:21.090 { 00:16:21.090 "name": "nvme0", 00:16:21.090 "trtype": "rdma", 00:16:21.090 "traddr": "192.168.100.8", 00:16:21.090 "adrfam": "ipv4", 00:16:21.090 "trsvcid": "4420", 00:16:21.090 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:21.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:21.090 "prchk_reftag": false, 00:16:21.090 "prchk_guard": false, 00:16:21.091 "hdgst": false, 00:16:21.091 "ddgst": false, 00:16:21.091 "dhchap_key": "key3", 00:16:21.091 "allow_unrecognized_csi": false, 00:16:21.091 "method": "bdev_nvme_attach_controller", 00:16:21.091 "req_id": 1 00:16:21.091 } 00:16:21.091 Got JSON-RPC error response 00:16:21.091 response: 00:16:21.091 { 00:16:21.091 "code": -5, 00:16:21.091 "message": "Input/output error" 00:16:21.091 } 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:21.091 18:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.349 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.608 request: 00:16:21.608 { 00:16:21.608 "name": "nvme0", 00:16:21.608 "trtype": "rdma", 00:16:21.608 "traddr": "192.168.100.8", 00:16:21.608 "adrfam": "ipv4", 00:16:21.608 "trsvcid": "4420", 00:16:21.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:21.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:21.608 "prchk_reftag": false, 00:16:21.608 "prchk_guard": false, 00:16:21.608 "hdgst": false, 00:16:21.608 "ddgst": false, 00:16:21.608 "dhchap_key": "key3", 00:16:21.608 "allow_unrecognized_csi": false, 00:16:21.608 "method": "bdev_nvme_attach_controller", 00:16:21.608 "req_id": 1 00:16:21.608 } 00:16:21.608 Got JSON-RPC error response 00:16:21.608 response: 00:16:21.608 { 00:16:21.608 "code": -5, 00:16:21.608 "message": "Input/output error" 00:16:21.608 } 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.608 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:21.867 18:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:22.130 request: 00:16:22.130 { 00:16:22.130 "name": "nvme0", 00:16:22.130 "trtype": "rdma", 00:16:22.130 "traddr": "192.168.100.8", 00:16:22.130 "adrfam": "ipv4", 00:16:22.130 "trsvcid": "4420", 00:16:22.130 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:22.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:22.130 "prchk_reftag": false, 00:16:22.130 "prchk_guard": false, 00:16:22.130 "hdgst": false, 00:16:22.130 "ddgst": false, 00:16:22.130 "dhchap_key": "key0", 00:16:22.130 "dhchap_ctrlr_key": "key1", 00:16:22.130 "allow_unrecognized_csi": false, 00:16:22.130 "method": "bdev_nvme_attach_controller", 00:16:22.130 "req_id": 1 00:16:22.130 } 00:16:22.130 Got JSON-RPC error response 00:16:22.130 response: 00:16:22.130 { 00:16:22.130 "code": -5, 00:16:22.130 "message": "Input/output error" 00:16:22.130 } 00:16:22.130 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:22.130 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.130 
18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.130 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.130 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:22.130 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:22.130 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:22.388 nvme0n1 00:16:22.388 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:22.388 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:22.388 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.648 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.648 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.648 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:22.906 18:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:23.474 nvme0n1 00:16:23.474 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:23.474 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:23.474 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.733 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:23.991 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.991 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:23.992 18:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: --dhchap-ctrl-secret DHHC-1:03:MjZiYTc3ZWU1MGEyNDY2NWJjYTlhYTFmMDc3Yjk3ZDNiYmUzZDIwOTlmZmI3ZmM2YjQwMmJlYWQ4OWI3OTYxYkB9NzQ=: 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.559 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:24.818 18:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:25.389 request: 00:16:25.389 { 00:16:25.389 "name": "nvme0", 00:16:25.389 "trtype": "rdma", 00:16:25.389 "traddr": "192.168.100.8", 00:16:25.389 "adrfam": "ipv4", 00:16:25.389 "trsvcid": "4420", 00:16:25.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:25.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:16:25.389 "prchk_reftag": false, 00:16:25.389 "prchk_guard": false, 00:16:25.389 "hdgst": false, 00:16:25.389 "ddgst": false, 00:16:25.389 "dhchap_key": "key1", 00:16:25.389 "allow_unrecognized_csi": false, 00:16:25.389 "method": "bdev_nvme_attach_controller", 00:16:25.389 "req_id": 1 00:16:25.389 } 00:16:25.389 Got JSON-RPC error response 00:16:25.389 response: 00:16:25.389 { 00:16:25.389 "code": -5, 00:16:25.389 "message": "Input/output error" 00:16:25.389 } 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:16:25.389 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:25.983 nvme0n1 00:16:25.983 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:25.983 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:25.983 18:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.270 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.270 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.270 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.529 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:26.529 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.530 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.530 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.530 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:26.530 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:26.530 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:26.789 nvme0n1 00:16:26.789 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:26.789 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:26.789 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.789 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.789 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.789 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: '' 2s 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: ]] 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2FhNjZiZjQ1ZWQ5ZjE2OGMyMDJmM2E2NzljZTllNDnqH6uQ: 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:27.048 18:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.585 
18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: 2s 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: ]] 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmI2YTM3MTNjNzdjMjZhYzZlODQwOGUwM2RmYWFmNzFhODUzYzNkMTM1ODJiMDkykmNjdg==: 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:29.585 18:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:31.491 18:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:31.491 18:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:31.491 18:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:31.491 18:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:31.491 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:31.491 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:31.492 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:32.060 nvme0n1 00:16:32.060 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:32.060 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.060 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.060 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.060 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:32.060 18:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:32.628 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:32.628 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:32.629 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:32.888 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:32.888 18:04:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:32.888 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.147 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.148 18:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:33.407 request: 00:16:33.407 { 00:16:33.407 "name": "nvme0", 00:16:33.407 "dhchap_key": "key1", 00:16:33.407 "dhchap_ctrlr_key": "key3", 00:16:33.407 "method": "bdev_nvme_set_keys", 00:16:33.407 "req_id": 1 00:16:33.407 } 00:16:33.407 Got JSON-RPC error response 00:16:33.407 response: 00:16:33.407 { 00:16:33.407 "code": -13, 00:16:33.407 "message": "Permission denied" 00:16:33.407 } 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:33.666 18:04:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:33.666 18:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:35.046 18:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:35.615 nvme0n1 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:35.615 18:04:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:35.615 18:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:36.183 request: 00:16:36.183 { 00:16:36.183 "name": "nvme0", 00:16:36.183 "dhchap_key": "key2", 00:16:36.183 "dhchap_ctrlr_key": "key0", 00:16:36.183 "method": "bdev_nvme_set_keys", 00:16:36.183 "req_id": 1 00:16:36.183 } 00:16:36.183 Got JSON-RPC error response 00:16:36.183 response: 00:16:36.183 { 00:16:36.183 "code": -13, 00:16:36.183 "message": "Permission denied" 00:16:36.183 } 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:36.184 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.443 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:36.443 18:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:37.381 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:37.381 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:37.381 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2324532 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@954 -- # '[' -z 2324532 ']' 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2324532 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324532 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324532' 00:16:37.641 killing process with pid 2324532 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2324532 00:16:37.641 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2324532 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:37.901 rmmod nvme_rdma 00:16:37.901 rmmod nvme_fabrics 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2349336 ']' 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2349336 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2349336 ']' 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2349336 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.901 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2349336 00:16:38.161 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.161 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.161 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2349336' 00:16:38.161 killing process with pid 2349336 00:16:38.161 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2349336 00:16:38.161 18:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2349336 00:16:38.161 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.161 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:38.161 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.mqJ /tmp/spdk.key-sha256.fr7 /tmp/spdk.key-sha384.BTU /tmp/spdk.key-sha512.TfU /tmp/spdk.key-sha512.mq7 /tmp/spdk.key-sha384.SA8 /tmp/spdk.key-sha256.UOG '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:16:38.161 00:16:38.161 real 2m45.224s 00:16:38.161 user 6m17.968s 00:16:38.161 sys 0m24.862s 00:16:38.161 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.161 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.161 ************************************ 00:16:38.161 END TEST nvmf_auth_target 00:16:38.161 ************************************ 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.421 18:04:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.422 18:04:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.422 ************************************ 00:16:38.422 START TEST nvmf_srq_overwhelm 00:16:38.422 ************************************ 00:16:38.422 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:16:38.422 * Looking for test storage... 
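The nvmf_auth_target run that closes above exercises DHCHAP re-keying in a strict order: fresh keys are staged on the target with nvmf_subsystem_set_keys, and only then is the host controller rotated with bdev_nvme_set_keys. Rotating the host toward a key the subsystem has not been given fails with JSON-RPC error -13 (Permission denied), which is exactly what the two NOT-wrapped attempts above assert. A minimal sketch of one legal rotation, reusing the socket path, NQNs and key names from this run (any other setup would need its own values):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    # Stage the new pair on the target first, so the subsystem accepts key2/key3.
    $RPC nvmf_subsystem_set_keys $SUBNQN $HOSTNQN \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Only then rotate the host controller; naming a key the subsystem does not
    # hold (key1/key3 and key2/key0 in the trace above) returns -13.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

The controllers are attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, so once the target has been rekeyed away from the host's current pair, reconnects fail authentication and the one-second loss timeout removes the controller; that is why the test sleeps and polls bdev_nvme_get_controllers with jq length until it returns 0 before moving on.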
00:16:38.422 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:38.422 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:38.422 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:16:38.422 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:38.422 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.682 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.683 --rc genhtml_branch_coverage=1 00:16:38.683 --rc genhtml_function_coverage=1 00:16:38.683 --rc genhtml_legend=1 00:16:38.683 --rc geninfo_all_blocks=1 00:16:38.683 --rc geninfo_unexecuted_blocks=1 00:16:38.683 00:16:38.683 ' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.683 --rc genhtml_branch_coverage=1 00:16:38.683 --rc genhtml_function_coverage=1 00:16:38.683 --rc genhtml_legend=1 00:16:38.683 --rc geninfo_all_blocks=1 00:16:38.683 --rc geninfo_unexecuted_blocks=1 00:16:38.683 00:16:38.683 ' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.683 --rc genhtml_branch_coverage=1 00:16:38.683 --rc genhtml_function_coverage=1 00:16:38.683 --rc genhtml_legend=1 00:16:38.683 --rc geninfo_all_blocks=1 00:16:38.683 --rc geninfo_unexecuted_blocks=1 00:16:38.683 00:16:38.683 ' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.683 --rc genhtml_branch_coverage=1 00:16:38.683 --rc genhtml_function_coverage=1 00:16:38.683 --rc genhtml_legend=1 00:16:38.683 --rc geninfo_all_blocks=1 00:16:38.683 --rc geninfo_unexecuted_blocks=1 00:16:38.683 00:16:38.683 ' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
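The long PATH values above are not log corruption: paths/export.sh prepends the golangci, protoc and Go directories every time it is sourced, and it is sourced once per nesting of scripts/common.sh, so the same triple piles up. Duplicate PATH entries are harmless (lookup stops at the first hit), but for comparison an idempotent prepend would keep the trace readable; a hedged illustration, not SPDK's actual export.sh:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present: leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH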
00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.683 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.683 18:04:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.813 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:46.814 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:46.814 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:46.814 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:46.814 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
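gather_supported_nvmf_pci_devs above builds whitelists of NIC PCI IDs per vendor (e810/x722 for Intel, a run of ConnectX IDs for Mellanox 0x15b3), keeps only the mlx list because SPDK_TEST_NVMF_NICS=mlx5, and resolves each matching PCI function to its netdev through sysfs; both ports of the 0x15b3:0x1015 device (a ConnectX-4 Lx) come back as mlx_0_0 and mlx_0_1. The earlier '[: : integer expression expected' complaint is common.sh line 33 testing an empty string with -eq, which the harness tolerates. The sysfs resolution amounts to this sketch, with the addresses and names taken from this run:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # every netdev bound to this PCI function appears under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done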
00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:46.814 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.814 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:46.814 altname enp217s0f0np0 00:16:46.814 altname ens818f0np0 00:16:46.814 inet 192.168.100.8/24 scope global mlx_0_0 00:16:46.814 valid_lft forever preferred_lft forever 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:46.814 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:46.815 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.815 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:46.815 altname enp217s0f1np1 00:16:46.815 altname ens818f1np1 00:16:46.815 inet 192.168.100.9/24 scope global mlx_0_1 00:16:46.815 valid_lft forever preferred_lft forever 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:46.815 192.168.100.9' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:46.815 192.168.100.9' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:46.815 192.168.100.9' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=2356350 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 2356350 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 2356350 ']' 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
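allocate_nic_ips walks get_rdma_if_list and reads each RDMA interface's IPv4 address back through the pipeline traced at nvmf/common.sh@117; head and tail then split the two-line RDMA_IP_LIST into the first and second target IPs. The helper below paraphrases that trace rather than quoting the source (column 4 of `ip -o -4 addr show` is ADDR/PREFIXLEN):

    get_ip_address() {
        local interface=$1
        # take the ADDR/PREFIXLEN column, then drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run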
00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.815 18:04:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 [2024-12-09 18:04:53.772548] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:16:46.815 [2024-12-09 18:04:53.772606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.815 [2024-12-09 18:04:53.863309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.815 [2024-12-09 18:04:53.905205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.815 [2024-12-09 18:04:53.905248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.815 [2024-12-09 18:04:53.905258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.815 [2024-12-09 18:04:53.905266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.815 [2024-12-09 18:04:53.905273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.815 [2024-12-09 18:04:53.907020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.815 [2024-12-09 18:04:53.907144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.815 [2024-12-09 18:04:53.907258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.815 [2024-12-09 18:04:53.907259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.815 [2024-12-09 18:04:54.676572] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed5980/0x1ed9e70) succeed. 00:16:46.815 [2024-12-09 18:04:54.685921] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed7010/0x1f1b510) succeed. 
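With nvmf_tgt up on cores 0-3 (-m 0xF) and both mlx5 ports claimed by the transport (the two create_ib_device notices), the test creates its RDMA transport with a deliberately modest shared-resource budget; pushing more outstanding IO at the target than those shared receive queues can absorb is the entire point of srq_overwhelm. Spelled out, the rpc_cmd above is equivalent to the call below; the glosses for -u and -s assume the long option spellings --io-unit-size and --max-srq-depth, so adjust if your rpc.py differs:

    scripts/rpc.py nvmf_create_transport \
        -t rdma \
        --num-shared-buffers 1024 \
        -u 8192 \
        -s 1024
    # --num-shared-buffers: data-buffer pool shared across all queues
    # -u: IO unit size, 8 KiB here
    # -s: SRQ depth cap, the limit the test sets out to overwhelm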
00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:16:46.815 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:46.816 Malloc0 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.816 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:47.074 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.074 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:47.074 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.074 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:47.074 [2024-12-09 18:04:54.805917] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:47.074 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.074 18:04:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.008 Malloc1 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.008 18:04:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 Malloc2 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.943 18:04:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:50.318 18:04:57 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:50.318 Malloc3 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.318 18:04:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:51.254 
18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.254 18:04:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:51.254 Malloc4 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.254 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:16:52.190 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:16:52.190 18:04:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
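Each pass of the seq 0 5 loop traced above (with the cnode5 iteration following below) repeats one pattern: create a subsystem, back it with a 64 MB malloc bdev, attach the namespace, expose it on the RDMA listener, connect from the initiator, and poll until the block device appears. A hedged bash reconstruction from the xtrace; the waitforblk retry bound and sleep interval are assumptions (only i=0 and return 0 appear in the log):

  # Poll lsblk until the namespace shows up, as in autotest_common.sh@1239-1250.
  waitforblk() {
      local i=0
      while ! lsblk -l -o NAME | grep -q -w "$1"; do
          ((++i > 15)) && return 1   # bound assumed
          sleep 1                    # interval assumed
      done
      return 0
  }

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  hostid=8013ee90-59d8-e711-906e-00163566263e

  for i in $(seq 0 5); do
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
      nvme connect -i 15 --hostnqn=$hostnqn --hostid=$hostid -t rdma \
          -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
      waitforblk "nvme${i}n1"
  done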
00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:52.190 Malloc5 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.190 18:05:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:53.125 18:05:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:16:53.383 
[global] 00:16:53.383 thread=1 00:16:53.383 invalidate=1 00:16:53.383 rw=read 00:16:53.383 time_based=1 00:16:53.383 runtime=10 00:16:53.383 ioengine=libaio 00:16:53.383 direct=1 00:16:53.383 bs=1048576 00:16:53.383 iodepth=128 00:16:53.383 norandommap=1 00:16:53.383 numjobs=13 00:16:53.383 00:16:53.383 [job0] 00:16:53.383 filename=/dev/nvme0n1 00:16:53.383 [job1] 00:16:53.383 filename=/dev/nvme1n1 00:16:53.383 [job2] 00:16:53.383 filename=/dev/nvme2n1 00:16:53.383 [job3] 00:16:53.383 filename=/dev/nvme3n1 00:16:53.383 [job4] 00:16:53.383 filename=/dev/nvme4n1 00:16:53.383 [job5] 00:16:53.383 filename=/dev/nvme5n1 00:16:53.383 Could not set queue depth (nvme0n1) 00:16:53.383 Could not set queue depth (nvme1n1) 00:16:53.383 Could not set queue depth (nvme2n1) 00:16:53.383 Could not set queue depth (nvme3n1) 00:16:53.383 Could not set queue depth (nvme4n1) 00:16:53.383 Could not set queue depth (nvme5n1) 00:16:53.642 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:53.642 ... 00:16:53.642 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:53.642 ... 00:16:53.642 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:53.642 ... 00:16:53.642 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:53.642 ... 00:16:53.642 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:53.642 ... 00:16:53.642 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:53.642 ... 
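The generated job file above runs 6 jobs with numjobs=13 each, which accounts for the "Starting 78 threads" that follows. As a sketch only (the fio-wrapper script's exact invocation may differ), the same workload could be launched with plain fio, where options before the first --name apply globally:

  fio --thread --invalidate=1 --rw=read --time_based --runtime=10 \
      --ioengine=libaio --direct=1 --bs=1048576 --iodepth=128 \
      --norandommap --numjobs=13 \
      --name=job0 --filename=/dev/nvme0n1 \
      --name=job1 --filename=/dev/nvme1n1 \
      --name=job2 --filename=/dev/nvme2n1 \
      --name=job3 --filename=/dev/nvme3n1 \
      --name=job4 --filename=/dev/nvme4n1 \
      --name=job5 --filename=/dev/nvme5n1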
00:16:53.642 fio-3.35 00:16:53.642 Starting 78 threads 00:17:08.521 00:17:08.521 job0: (groupid=0, jobs=1): err= 0: pid=2357811: Mon Dec 9 18:05:13 2024 00:17:08.521 read: IOPS=67, BW=67.1MiB/s (70.4MB/s)(805MiB/11998msec) 00:17:08.521 slat (usec): min=45, max=2138.6k, avg=14806.75, stdev=147499.01 00:17:08.521 clat (msec): min=70, max=9014, avg=1811.49, stdev=2974.02 00:17:08.521 lat (msec): min=401, max=9019, avg=1826.30, stdev=2982.69 00:17:08.521 clat percentiles (msec): 00:17:08.521 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 409], 20.00th=[ 426], 00:17:08.521 | 30.00th=[ 435], 40.00th=[ 443], 50.00th=[ 447], 60.00th=[ 481], 00:17:08.521 | 70.00th=[ 617], 80.00th=[ 852], 90.00th=[ 8792], 95.00th=[ 8926], 00:17:08.521 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:17:08.521 | 99.99th=[ 9060] 00:17:08.521 bw ( KiB/s): min= 4807, max=311296, per=4.61%, avg=153904.22, stdev=140062.97, samples=9 00:17:08.521 iops : min= 4, max= 304, avg=150.11, stdev=137.00, samples=9 00:17:08.521 lat (msec) : 100=0.12%, 500=59.88%, 750=15.28%, 1000=7.58%, >=2000=17.14% 00:17:08.521 cpu : usr=0.04%, sys=1.67%, ctx=785, majf=0, minf=32769 00:17:08.522 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.522 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357812: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=1, BW=1875KiB/s (1920kB/s)(22.0MiB/12015msec) 00:17:08.522 slat (msec): min=9, max=2115, avg=543.41, stdev=884.75 00:17:08.522 clat (msec): min=59, max=11939, avg=6843.03, stdev=3541.52 00:17:08.522 lat (msec): min=2111, max=12014, avg=7386.44, stdev=3363.75 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 59], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4279], 00:17:08.522 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:17:08.522 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[11879], 00:17:08.522 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:17:08.522 | 99.99th=[11879] 00:17:08.522 lat (msec) : 100=4.55%, >=2000=95.45% 00:17:08.522 cpu : usr=0.01%, sys=0.18%, ctx=60, majf=0, minf=5633 00:17:08.522 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:08.522 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357813: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=2, BW=2136KiB/s (2187kB/s)(25.0MiB/11984msec) 00:17:08.522 slat (msec): min=7, max=2103, avg=477.30, stdev=848.56 00:17:08.522 clat (msec): min=50, max=11924, avg=6796.90, stdev=3368.44 00:17:08.522 lat (msec): min=2119, max=11983, avg=7274.20, stdev=3214.57 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 51], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 4279], 00:17:08.522 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:17:08.522 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[11879], 00:17:08.522 | 99.00th=[11879], 99.50th=[11879], 
99.90th=[11879], 99.95th=[11879], 00:17:08.522 | 99.99th=[11879] 00:17:08.522 lat (msec) : 100=4.00%, >=2000=96.00% 00:17:08.522 cpu : usr=0.01%, sys=0.22%, ctx=61, majf=0, minf=6401 00:17:08.522 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:08.522 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357814: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=3, BW=3422KiB/s (3504kB/s)(40.0MiB/11970msec) 00:17:08.522 slat (usec): min=678, max=2120.6k, avg=298140.42, stdev=705632.48 00:17:08.522 clat (msec): min=43, max=11948, avg=6979.18, stdev=4249.40 00:17:08.522 lat (msec): min=2086, max=11969, avg=7277.32, stdev=4167.88 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 44], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2165], 00:17:08.522 | 30.00th=[ 2198], 40.00th=[ 4279], 50.00th=[ 6477], 60.00th=[ 8658], 00:17:08.522 | 70.00th=[10805], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:17:08.522 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.522 | 99.99th=[12013] 00:17:08.522 lat (msec) : 50=2.50%, >=2000=97.50% 00:17:08.522 cpu : usr=0.00%, sys=0.24%, ctx=80, majf=0, minf=10241 00:17:08.522 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:08.522 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357815: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=79, BW=79.7MiB/s (83.5MB/s)(804MiB/10092msec) 00:17:08.522 slat (usec): min=46, max=2138.6k, avg=12433.91, stdev=128156.12 00:17:08.522 clat (msec): min=88, max=7047, avg=1540.31, stdev=2196.27 00:17:08.522 lat (msec): min=91, max=7051, avg=1552.74, stdev=2202.85 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 201], 5.00th=[ 502], 10.00th=[ 502], 20.00th=[ 506], 00:17:08.522 | 30.00th=[ 506], 40.00th=[ 514], 50.00th=[ 518], 60.00th=[ 625], 00:17:08.522 | 70.00th=[ 651], 80.00th=[ 693], 90.00th=[ 6745], 95.00th=[ 6879], 00:17:08.522 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7080], 99.95th=[ 7080], 00:17:08.522 | 99.99th=[ 7080] 00:17:08.522 bw ( KiB/s): min=14336, max=258048, per=4.60%, avg=153749.00, stdev=105725.54, samples=9 00:17:08.522 iops : min= 14, max= 252, avg=150.00, stdev=103.31, samples=9 00:17:08.522 lat (msec) : 100=0.37%, 250=1.24%, 500=2.86%, 750=77.11%, >=2000=18.41% 00:17:08.522 cpu : usr=0.05%, sys=1.81%, ctx=727, majf=0, minf=32769 00:17:08.522 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.522 issued rwts: total=804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357816: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=5, BW=6139KiB/s (6287kB/s)(72.0MiB/12009msec) 00:17:08.522 slat 
(usec): min=639, max=2105.4k, avg=165876.21, stdev=543874.78 00:17:08.522 clat (msec): min=65, max=12007, avg=8338.21, stdev=3624.42 00:17:08.522 lat (msec): min=2096, max=12008, avg=8504.09, stdev=3512.03 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 66], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4279], 00:17:08.522 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10671], 00:17:08.522 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:08.522 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.522 | 99.99th=[12013] 00:17:08.522 lat (msec) : 100=1.39%, >=2000=98.61% 00:17:08.522 cpu : usr=0.00%, sys=0.61%, ctx=65, majf=0, minf=18433 00:17:08.522 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:08.522 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357817: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=3, BW=3403KiB/s (3485kB/s)(40.0MiB/12037msec) 00:17:08.522 slat (usec): min=811, max=2128.1k, avg=299657.66, stdev=708663.71 00:17:08.522 clat (msec): min=50, max=12035, avg=9384.08, stdev=3843.55 00:17:08.522 lat (msec): min=2112, max=12036, avg=9683.74, stdev=3553.51 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 51], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4329], 00:17:08.522 | 30.00th=[ 6477], 40.00th=[10805], 50.00th=[11879], 60.00th=[11879], 00:17:08.522 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:08.522 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.522 | 99.99th=[12013] 00:17:08.522 lat (msec) : 100=2.50%, >=2000=97.50% 00:17:08.522 cpu : usr=0.00%, sys=0.28%, ctx=84, majf=0, minf=10241 00:17:08.522 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:08.522 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357818: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=2, BW=2972KiB/s (3043kB/s)(35.0MiB/12059msec) 00:17:08.522 slat (usec): min=870, max=2126.4k, avg=343135.28, stdev=749402.28 00:17:08.522 clat (msec): min=48, max=12052, avg=9064.69, stdev=3776.24 00:17:08.522 lat (msec): min=2119, max=12058, avg=9407.82, stdev=3465.72 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 48], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4329], 00:17:08.522 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[11879], 60.00th=[12013], 00:17:08.522 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:08.522 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.522 | 99.99th=[12013] 00:17:08.522 lat (msec) : 50=2.86%, >=2000=97.14% 00:17:08.522 cpu : usr=0.00%, sys=0.28%, ctx=66, majf=0, minf=8961 00:17:08.522 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=100.0%, >=64=0.0% 00:17:08.522 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357819: Mon Dec 9 18:05:13 2024 00:17:08.522 read: IOPS=247, BW=248MiB/s (260MB/s)(2492MiB/10052msec) 00:17:08.522 slat (usec): min=39, max=2090.3k, avg=4009.08, stdev=63262.45 00:17:08.522 clat (msec): min=49, max=4691, avg=437.59, stdev=956.80 00:17:08.522 lat (msec): min=51, max=4692, avg=441.60, stdev=961.71 00:17:08.522 clat percentiles (msec): 00:17:08.522 | 1.00th=[ 126], 5.00th=[ 127], 10.00th=[ 127], 20.00th=[ 128], 00:17:08.522 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 129], 60.00th=[ 130], 00:17:08.522 | 70.00th=[ 180], 80.00th=[ 498], 90.00th=[ 502], 95.00th=[ 2366], 00:17:08.522 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:17:08.522 | 99.99th=[ 4665] 00:17:08.522 bw ( KiB/s): min= 8192, max=1021952, per=14.25%, avg=476160.00, stdev=420844.21, samples=10 00:17:08.522 iops : min= 8, max= 998, avg=465.00, stdev=410.98, samples=10 00:17:08.522 lat (msec) : 50=0.04%, 100=0.32%, 250=72.59%, 500=15.17%, 750=6.18% 00:17:08.522 lat (msec) : 2000=0.04%, >=2000=5.66% 00:17:08.522 cpu : usr=0.16%, sys=2.64%, ctx=2258, majf=0, minf=32769 00:17:08.522 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:08.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.522 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.522 job0: (groupid=0, jobs=1): err= 0: pid=2357820: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=23, BW=23.6MiB/s (24.7MB/s)(285MiB/12093msec) 00:17:08.523 slat (usec): min=76, max=2180.3k, avg=35091.27, stdev=251667.95 00:17:08.523 clat (msec): min=601, max=11333, avg=5265.36, stdev=4996.04 00:17:08.523 lat (msec): min=604, max=11334, avg=5300.45, stdev=5003.10 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 617], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 634], 00:17:08.523 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 701], 60.00th=[ 9194], 00:17:08.523 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:17:08.523 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:17:08.523 | 99.99th=[11342] 00:17:08.523 bw ( KiB/s): min= 2048, max=161792, per=1.61%, avg=53927.83, stdev=73760.44, samples=6 00:17:08.523 iops : min= 2, max= 158, avg=52.50, stdev=72.15, samples=6 00:17:08.523 lat (msec) : 750=50.88%, >=2000=49.12% 00:17:08.523 cpu : usr=0.04%, sys=1.34%, ctx=283, majf=0, minf=32769 00:17:08.523 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:08.523 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job0: (groupid=0, jobs=1): err= 0: pid=2357822: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=1, BW=1878KiB/s (1923kB/s)(22.0MiB/11994msec) 00:17:08.523 slat (usec): min=928, max=4255.1k, avg=542727.87, stdev=1114337.86 00:17:08.523 clat (msec): min=53, max=11929, avg=6401.32, stdev=3293.34 00:17:08.523 lat (msec): min=2125, max=11993, avg=6944.05, 
stdev=3179.24 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 54], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4329], 00:17:08.523 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6409], 00:17:08.523 | 70.00th=[ 6477], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:17:08.523 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:17:08.523 | 99.99th=[11879] 00:17:08.523 lat (msec) : 100=4.55%, >=2000=95.45% 00:17:08.523 cpu : usr=0.00%, sys=0.15%, ctx=51, majf=0, minf=5633 00:17:08.523 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:08.523 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job0: (groupid=0, jobs=1): err= 0: pid=2357823: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=2, BW=2546KiB/s (2608kB/s)(30.0MiB/12064msec) 00:17:08.523 slat (usec): min=884, max=2141.9k, avg=400667.76, stdev=807177.50 00:17:08.523 clat (msec): min=43, max=12058, avg=9691.06, stdev=3818.44 00:17:08.523 lat (msec): min=2112, max=12063, avg=10091.73, stdev=3376.24 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 44], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 6409], 00:17:08.523 | 30.00th=[ 8658], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:17:08.523 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:08.523 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.523 | 99.99th=[12013] 00:17:08.523 lat (msec) : 50=3.33%, >=2000=96.67% 00:17:08.523 cpu : usr=0.00%, sys=0.26%, ctx=75, majf=0, minf=7681 00:17:08.523 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:08.523 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job0: (groupid=0, jobs=1): err= 0: pid=2357824: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=8, BW=8490KiB/s (8694kB/s)(99.0MiB/11940msec) 00:17:08.523 slat (usec): min=923, max=2141.8k, avg=120129.53, stdev=460047.31 00:17:08.523 clat (msec): min=46, max=11876, avg=10426.49, stdev=2613.07 00:17:08.523 lat (msec): min=2092, max=11939, avg=10546.62, stdev=2395.31 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 47], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[11073], 00:17:08.523 | 30.00th=[11208], 40.00th=[11342], 50.00th=[11342], 60.00th=[11476], 00:17:08.523 | 70.00th=[11610], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879], 00:17:08.523 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:17:08.523 | 99.99th=[11879] 00:17:08.523 lat (msec) : 50=1.01%, >=2000=98.99% 00:17:08.523 cpu : usr=0.00%, sys=0.75%, ctx=208, majf=0, minf=25345 00:17:08.523 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=32.3%, >=64=36.4% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:08.523 issued rwts: total=99,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job1: 
(groupid=0, jobs=1): err= 0: pid=2357845: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=56, BW=56.8MiB/s (59.5MB/s)(683MiB/12028msec) 00:17:08.523 slat (usec): min=42, max=2133.4k, avg=17526.27, stdev=147185.97 00:17:08.523 clat (msec): min=53, max=8430, avg=2083.98, stdev=2883.77 00:17:08.523 lat (msec): min=462, max=8431, avg=2101.51, stdev=2890.68 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 477], 5.00th=[ 502], 10.00th=[ 510], 20.00th=[ 518], 00:17:08.523 | 30.00th=[ 523], 40.00th=[ 550], 50.00th=[ 567], 60.00th=[ 693], 00:17:08.523 | 70.00th=[ 1183], 80.00th=[ 1955], 90.00th=[ 8221], 95.00th=[ 8288], 00:17:08.523 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:17:08.523 | 99.99th=[ 8423] 00:17:08.523 bw ( KiB/s): min= 2048, max=249856, per=3.78%, avg=126260.56, stdev=111500.66, samples=9 00:17:08.523 iops : min= 2, max= 244, avg=123.11, stdev=109.12, samples=9 00:17:08.523 lat (msec) : 100=0.15%, 500=4.54%, 750=56.81%, 1000=5.56%, 2000=13.18% 00:17:08.523 lat (msec) : >=2000=19.77% 00:17:08.523 cpu : usr=0.05%, sys=1.01%, ctx=971, majf=0, minf=32769 00:17:08.523 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:08.523 issued rwts: total=683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job1: (groupid=0, jobs=1): err= 0: pid=2357846: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=9, BW=9301KiB/s (9525kB/s)(109MiB/12000msec) 00:17:08.523 slat (usec): min=463, max=2134.9k, avg=109500.80, stdev=417212.99 00:17:08.523 clat (msec): min=63, max=11999, avg=6804.54, stdev=2466.99 00:17:08.523 lat (msec): min=2111, max=11999, avg=6914.04, stdev=2429.63 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 5537], 20.00th=[ 5671], 00:17:08.523 | 30.00th=[ 5738], 40.00th=[ 5873], 50.00th=[ 6007], 60.00th=[ 6141], 00:17:08.523 | 70.00th=[ 6275], 80.00th=[ 8557], 90.00th=[12013], 95.00th=[12013], 00:17:08.523 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.523 | 99.99th=[12013] 00:17:08.523 lat (msec) : 100=0.92%, >=2000=99.08% 00:17:08.523 cpu : usr=0.00%, sys=0.56%, ctx=343, majf=0, minf=27905 00:17:08.523 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:08.523 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job1: (groupid=0, jobs=1): err= 0: pid=2357847: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=38, BW=38.7MiB/s (40.6MB/s)(469MiB/12109msec) 00:17:08.523 slat (usec): min=42, max=2154.6k, avg=25679.96, stdev=200818.25 00:17:08.523 clat (msec): min=62, max=8557, avg=2340.91, stdev=2190.15 00:17:08.523 lat (msec): min=641, max=10686, avg=2366.59, stdev=2218.25 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 642], 5.00th=[ 642], 10.00th=[ 642], 20.00th=[ 642], 00:17:08.523 | 30.00th=[ 642], 40.00th=[ 651], 50.00th=[ 651], 60.00th=[ 709], 00:17:08.523 | 70.00th=[ 4463], 80.00th=[ 4732], 90.00th=[ 6074], 95.00th=[ 6141], 00:17:08.523 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 8557], 99.95th=[ 8557], 
00:17:08.523 | 99.99th=[ 8557] 00:17:08.523 bw ( KiB/s): min= 2048, max=215040, per=4.18%, avg=139627.80, stdev=85756.80, samples=5 00:17:08.523 iops : min= 2, max= 210, avg=136.20, stdev=83.80, samples=5 00:17:08.523 lat (msec) : 100=0.21%, 750=61.19%, >=2000=38.59% 00:17:08.523 cpu : usr=0.02%, sys=1.41%, ctx=420, majf=0, minf=32769 00:17:08.523 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:08.523 issued rwts: total=469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job1: (groupid=0, jobs=1): err= 0: pid=2357848: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=172, BW=172MiB/s (181MB/s)(1737MiB/10071msec) 00:17:08.523 slat (usec): min=38, max=102478, avg=5780.79, stdev=7408.26 00:17:08.523 clat (msec): min=20, max=4217, avg=679.24, stdev=282.64 00:17:08.523 lat (msec): min=73, max=4272, avg=685.02, stdev=284.47 00:17:08.523 clat percentiles (msec): 00:17:08.523 | 1.00th=[ 199], 5.00th=[ 397], 10.00th=[ 401], 20.00th=[ 405], 00:17:08.523 | 30.00th=[ 451], 40.00th=[ 550], 50.00th=[ 684], 60.00th=[ 751], 00:17:08.523 | 70.00th=[ 802], 80.00th=[ 860], 90.00th=[ 1036], 95.00th=[ 1200], 00:17:08.523 | 99.00th=[ 1318], 99.50th=[ 1334], 99.90th=[ 4212], 99.95th=[ 4212], 00:17:08.523 | 99.99th=[ 4212] 00:17:08.523 bw ( KiB/s): min=10240, max=327680, per=5.45%, avg=182100.53, stdev=86127.19, samples=17 00:17:08.523 iops : min= 10, max= 320, avg=177.76, stdev=84.11, samples=17 00:17:08.523 lat (msec) : 50=0.06%, 100=0.75%, 250=0.35%, 500=30.69%, 750=27.98% 00:17:08.523 lat (msec) : 1000=27.81%, 2000=12.26%, >=2000=0.12% 00:17:08.523 cpu : usr=0.13%, sys=2.21%, ctx=2583, majf=0, minf=32769 00:17:08.523 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:17:08.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.523 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.523 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.523 job1: (groupid=0, jobs=1): err= 0: pid=2357849: Mon Dec 9 18:05:13 2024 00:17:08.523 read: IOPS=38, BW=38.1MiB/s (39.9MB/s)(455MiB/11949msec) 00:17:08.523 slat (usec): min=413, max=2106.8k, avg=26110.66, stdev=184419.76 00:17:08.524 clat (msec): min=64, max=9204, avg=3171.31, stdev=3065.53 00:17:08.524 lat (msec): min=886, max=9208, avg=3197.42, stdev=3072.00 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 894], 5.00th=[ 953], 10.00th=[ 969], 20.00th=[ 1020], 00:17:08.524 | 30.00th=[ 1036], 40.00th=[ 1070], 50.00th=[ 1116], 60.00th=[ 1183], 00:17:08.524 | 70.00th=[ 4178], 80.00th=[ 6409], 90.00th=[ 8926], 95.00th=[ 9060], 00:17:08.524 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:17:08.524 | 99.99th=[ 9194] 00:17:08.524 bw ( KiB/s): min= 2048, max=143360, per=1.99%, avg=66634.70, stdev=54124.82, samples=10 00:17:08.524 iops : min= 2, max= 140, avg=65.00, stdev=52.93, samples=10 00:17:08.524 lat (msec) : 100=0.22%, 1000=13.41%, 2000=48.35%, >=2000=38.02% 00:17:08.524 cpu : usr=0.02%, sys=1.06%, ctx=941, majf=0, minf=32769 00:17:08.524 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.2% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:17:08.524 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:08.524 issued rwts: total=455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357851: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=1, BW=1800KiB/s (1843kB/s)(21.0MiB/11947msec) 00:17:08.524 slat (usec): min=473, max=2098.4k, avg=564981.20, stdev=901671.29 00:17:08.524 clat (msec): min=81, max=11931, avg=7325.43, stdev=3677.46 00:17:08.524 lat (msec): min=2124, max=11946, avg=7890.41, stdev=3410.66 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 82], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:17:08.524 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:17:08.524 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:17:08.524 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:17:08.524 | 99.99th=[11879] 00:17:08.524 lat (msec) : 100=4.76%, >=2000=95.24% 00:17:08.524 cpu : usr=0.01%, sys=0.13%, ctx=55, majf=0, minf=5377 00:17:08.524 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:08.524 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357852: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=3, BW=3765KiB/s (3855kB/s)(44.0MiB/11967msec) 00:17:08.524 slat (usec): min=617, max=2133.6k, avg=227761.99, stdev=624910.59 00:17:08.524 clat (msec): min=1944, max=11965, avg=9134.29, stdev=3776.33 00:17:08.524 lat (msec): min=2013, max=11966, avg=9362.05, stdev=3632.09 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 1938], 5.00th=[ 2022], 10.00th=[ 2039], 20.00th=[ 4212], 00:17:08.524 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[11879], 00:17:08.524 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:08.524 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.524 | 99.99th=[12013] 00:17:08.524 lat (msec) : 2000=2.27%, >=2000=97.73% 00:17:08.524 cpu : usr=0.00%, sys=0.29%, ctx=79, majf=0, minf=11265 00:17:08.524 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:08.524 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357853: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=10, BW=10.0MiB/s (10.5MB/s)(101MiB/10052msec) 00:17:08.524 slat (usec): min=753, max=2096.6k, avg=99079.94, stdev=418098.92 00:17:08.524 clat (msec): min=43, max=10049, avg=4800.45, stdev=4103.40 00:17:08.524 lat (msec): min=52, max=10050, avg=4899.53, stdev=4108.21 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 53], 5.00th=[ 103], 10.00th=[ 124], 20.00th=[ 218], 00:17:08.524 | 30.00th=[ 268], 40.00th=[ 2400], 50.00th=[ 4530], 60.00th=[ 6678], 00:17:08.524 | 70.00th=[ 8792], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:17:08.524 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 
99.95th=[10000], 00:17:08.524 | 99.99th=[10000] 00:17:08.524 lat (msec) : 50=0.99%, 100=2.97%, 250=23.76%, 500=5.94%, >=2000=66.34% 00:17:08.524 cpu : usr=0.01%, sys=0.96%, ctx=71, majf=0, minf=25857 00:17:08.524 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=7.9%, 16=15.8%, 32=31.7%, >=64=37.6% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:08.524 issued rwts: total=101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357854: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=23, BW=23.5MiB/s (24.6MB/s)(284MiB/12093msec) 00:17:08.524 slat (usec): min=75, max=2125.5k, avg=42366.01, stdev=274173.61 00:17:08.524 clat (msec): min=58, max=11251, avg=5249.28, stdev=4707.82 00:17:08.524 lat (msec): min=626, max=11251, avg=5291.64, stdev=4708.64 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 625], 5.00th=[ 625], 10.00th=[ 634], 20.00th=[ 634], 00:17:08.524 | 30.00th=[ 659], 40.00th=[ 693], 50.00th=[ 4245], 60.00th=[ 6477], 00:17:08.524 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208], 00:17:08.524 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:17:08.524 | 99.99th=[11208] 00:17:08.524 bw ( KiB/s): min= 4096, max=137216, per=1.37%, avg=45638.71, stdev=59695.67, samples=7 00:17:08.524 iops : min= 4, max= 134, avg=44.43, stdev=58.40, samples=7 00:17:08.524 lat (msec) : 100=0.35%, 750=45.42%, >=2000=54.23% 00:17:08.524 cpu : usr=0.01%, sys=1.28%, ctx=272, majf=0, minf=32769 00:17:08.524 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.3%, >=64=77.8% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:08.524 issued rwts: total=284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357855: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=28, BW=28.5MiB/s (29.8MB/s)(341MiB/11983msec) 00:17:08.524 slat (usec): min=79, max=2094.9k, avg=34940.73, stdev=220798.30 00:17:08.524 clat (msec): min=66, max=10492, avg=4290.74, stdev=3870.71 00:17:08.524 lat (msec): min=785, max=10494, avg=4325.68, stdev=3875.72 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 785], 5.00th=[ 810], 10.00th=[ 818], 20.00th=[ 835], 00:17:08.524 | 30.00th=[ 869], 40.00th=[ 877], 50.00th=[ 2366], 60.00th=[ 4463], 00:17:08.524 | 70.00th=[ 6275], 80.00th=[10000], 90.00th=[10268], 95.00th=[10402], 00:17:08.524 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:17:08.524 | 99.99th=[10537] 00:17:08.524 bw ( KiB/s): min= 4096, max=145408, per=1.45%, avg=48329.11, stdev=57770.84, samples=9 00:17:08.524 iops : min= 4, max= 142, avg=47.00, stdev=56.57, samples=9 00:17:08.524 lat (msec) : 100=0.29%, 1000=47.51%, >=2000=52.20% 00:17:08.524 cpu : usr=0.00%, sys=0.75%, ctx=637, majf=0, minf=32769 00:17:08.524 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.5% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:17:08.524 issued rwts: total=341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357856: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=2, BW=3053KiB/s (3126kB/s)(36.0MiB/12074msec) 00:17:08.524 slat (usec): min=1019, max=2157.1k, avg=333887.24, stdev=747396.76 00:17:08.524 clat (msec): min=53, max=12071, avg=9999.45, stdev=3519.54 00:17:08.524 lat (msec): min=2119, max=12073, avg=10333.34, stdev=3093.34 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 54], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 8557], 00:17:08.524 | 30.00th=[10671], 40.00th=[10805], 50.00th=[12013], 60.00th=[12013], 00:17:08.524 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:08.524 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.524 | 99.99th=[12013] 00:17:08.524 lat (msec) : 100=2.78%, >=2000=97.22% 00:17:08.524 cpu : usr=0.00%, sys=0.35%, ctx=86, majf=0, minf=9217 00:17:08.524 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:08.524 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357857: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=3, BW=3165KiB/s (3241kB/s)(37.0MiB/11971msec) 00:17:08.524 slat (usec): min=1041, max=2143.8k, avg=321665.17, stdev=726868.80 00:17:08.524 clat (msec): min=69, max=11948, avg=6263.87, stdev=3618.23 00:17:08.524 lat (msec): min=2110, max=11970, avg=6585.53, stdev=3581.05 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 69], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 2165], 00:17:08.524 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:17:08.524 | 70.00th=[ 8557], 80.00th=[10805], 90.00th=[11879], 95.00th=[11879], 00:17:08.524 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:08.524 | 99.99th=[12013] 00:17:08.524 lat (msec) : 100=2.70%, >=2000=97.30% 00:17:08.524 cpu : usr=0.00%, sys=0.30%, ctx=60, majf=0, minf=9473 00:17:08.524 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:17:08.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.524 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:08.524 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.524 job1: (groupid=0, jobs=1): err= 0: pid=2357858: Mon Dec 9 18:05:13 2024 00:17:08.524 read: IOPS=25, BW=25.0MiB/s (26.2MB/s)(301MiB/12024msec) 00:17:08.524 slat (usec): min=543, max=2142.9k, avg=33443.42, stdev=213437.90 00:17:08.524 clat (msec): min=886, max=8446, avg=3072.51, stdev=2030.00 00:17:08.524 lat (msec): min=889, max=8485, avg=3105.96, stdev=2053.55 00:17:08.524 clat percentiles (msec): 00:17:08.524 | 1.00th=[ 894], 5.00th=[ 902], 10.00th=[ 911], 20.00th=[ 961], 00:17:08.524 | 30.00th=[ 978], 40.00th=[ 1045], 50.00th=[ 4212], 60.00th=[ 4530], 00:17:08.524 | 70.00th=[ 4665], 80.00th=[ 4933], 90.00th=[ 5134], 95.00th=[ 6477], 00:17:08.525 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 8423], 99.95th=[ 8423], 00:17:08.525 | 99.99th=[ 8423] 00:17:08.525 bw ( KiB/s): min= 7288, max=143360, per=2.66%, avg=88862.00, stdev=65086.64, samples=4 00:17:08.525 iops : min= 7, max= 140, avg=86.75, stdev=63.61, samples=4 
00:17:08.525 lat (msec) : 1000=34.22%, 2000=11.30%, >=2000=54.49%
00:17:08.525 cpu : usr=0.01%, sys=0.87%, ctx=738, majf=0, minf=32769
00:17:08.525 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.6%, >=64=79.1%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:17:08.525 issued rwts: total=301,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357868: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=126, BW=126MiB/s (132MB/s)(1276MiB/10105msec)
00:17:08.525 slat (usec): min=43, max=2119.3k, avg=7833.51, stdev=83606.30
00:17:08.525 clat (msec): min=101, max=5175, avg=973.58, stdev=1350.55
00:17:08.525 lat (msec): min=170, max=5176, avg=981.41, stdev=1357.84
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 347], 5.00th=[ 376], 10.00th=[ 380], 20.00th=[ 380],
00:17:08.525 | 30.00th=[ 401], 40.00th=[ 481], 50.00th=[ 527], 60.00th=[ 625],
00:17:08.525 | 70.00th=[ 634], 80.00th=[ 684], 90.00th=[ 2735], 95.00th=[ 5067],
00:17:08.525 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201],
00:17:08.525 | 99.99th=[ 5201]
00:17:08.525 bw ( KiB/s): min= 8192, max=335872, per=5.82%, avg=194536.50, stdev=95813.19, samples=12
00:17:08.525 iops : min= 8, max= 328, avg=189.92, stdev=93.58, samples=12
00:17:08.525 lat (msec) : 250=0.47%, 500=41.69%, 750=44.44%, 1000=2.98%, >=2000=10.42%
00:17:08.525 cpu : usr=0.07%, sys=2.13%, ctx=1240, majf=0, minf=32769
00:17:08.525 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.525 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357869: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=11, BW=11.5MiB/s (12.1MB/s)(116MiB/10057msec)
00:17:08.525 slat (usec): min=517, max=2121.4k, avg=86325.18, stdev=383566.83
00:17:08.525 clat (msec): min=42, max=10050, avg=7785.76, stdev=3347.51
00:17:08.525 lat (msec): min=99, max=10056, avg=7872.09, stdev=3274.42
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 100], 5.00th=[ 111], 10.00th=[ 155], 20.00th=[ 6678],
00:17:08.525 | 30.00th=[ 9060], 40.00th=[ 9194], 50.00th=[ 9329], 60.00th=[ 9463],
00:17:08.525 | 70.00th=[ 9597], 80.00th=[ 9731], 90.00th=[ 9866], 95.00th=[10000],
00:17:08.525 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:17:08.525 | 99.99th=[10000]
00:17:08.525 lat (msec) : 50=0.86%, 100=1.72%, 250=9.48%, 500=1.72%, >=2000=86.21%
00:17:08.525 cpu : usr=0.05%, sys=0.77%, ctx=257, majf=0, minf=29697
00:17:08.525 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.9%, 16=13.8%, 32=27.6%, >=64=45.7%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.525 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357870: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=3, BW=3418KiB/s (3501kB/s)(40.0MiB/11982msec)
00:17:08.525 slat (usec): min=1324, max=2148.6k, avg=297427.96, stdev=697556.33
00:17:08.525 clat (msec): min=84, max=11979, avg=7979.83, stdev=4143.15
00:17:08.525 lat (msec): min=2112, max=11981, avg=8277.26, stdev=3985.88
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 85], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 2198],
00:17:08.525 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[11745],
00:17:08.525 | 70.00th=[11879], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013],
00:17:08.525 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:08.525 | 99.99th=[12013]
00:17:08.525 lat (msec) : 100=2.50%, >=2000=97.50%
00:17:08.525 cpu : usr=0.00%, sys=0.38%, ctx=78, majf=0, minf=10241
00:17:08.525 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:08.525 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357871: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=25, BW=25.2MiB/s (26.4MB/s)(254MiB/10087msec)
00:17:08.525 slat (usec): min=42, max=2184.0k, avg=39503.70, stdev=266011.01
00:17:08.525 clat (msec): min=51, max=9376, avg=4831.10, stdev=4206.57
00:17:08.525 lat (msec): min=173, max=9385, avg=4870.61, stdev=4202.86
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 178], 5.00th=[ 531], 10.00th=[ 535], 20.00th=[ 542],
00:17:08.525 | 30.00th=[ 550], 40.00th=[ 558], 50.00th=[ 4597], 60.00th=[ 8926],
00:17:08.525 | 70.00th=[ 9060], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9329],
00:17:08.525 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:17:08.525 | 99.99th=[ 9329]
00:17:08.525 bw ( KiB/s): min= 2048, max=137216, per=1.47%, avg=49149.40, stdev=62658.98, samples=5
00:17:08.525 iops : min= 2, max= 134, avg=47.80, stdev=61.36, samples=5
00:17:08.525 lat (msec) : 100=0.39%, 250=1.97%, 500=0.39%, 750=44.49%, >=2000=52.76%
00:17:08.525 cpu : usr=0.01%, sys=1.35%, ctx=256, majf=0, minf=32769
00:17:08.525 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:17:08.525 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357872: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=2, BW=2837KiB/s (2905kB/s)(28.0MiB/10106msec)
00:17:08.525 slat (msec): min=2, max=2113, avg=358.51, stdev=752.45
00:17:08.525 clat (msec): min=67, max=10097, avg=5114.54, stdev=3854.62
00:17:08.525 lat (msec): min=125, max=10105, avg=5473.05, stdev=3834.58
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 68], 5.00th=[ 126], 10.00th=[ 188], 20.00th=[ 215],
00:17:08.525 | 30.00th=[ 2366], 40.00th=[ 4530], 50.00th=[ 4530], 60.00th=[ 6611],
00:17:08.525 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[10134], 95.00th=[10134],
00:17:08.525 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:17:08.525 | 99.99th=[10134]
00:17:08.525 lat (msec) : 100=3.57%, 250=21.43%, >=2000=75.00%
00:17:08.525 cpu : usr=0.00%, sys=0.25%, ctx=85, majf=0, minf=7169
00:17:08.525 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:17:08.525 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357873: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=5, BW=5243KiB/s (5368kB/s)(62.0MiB/12110msec)
00:17:08.525 slat (usec): min=937, max=2078.5k, avg=161382.58, stdev=525189.01
00:17:08.525 clat (msec): min=2103, max=12107, avg=9482.14, stdev=3723.44
00:17:08.525 lat (msec): min=2118, max=12109, avg=9643.52, stdev=3613.61
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 2106], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:17:08.525 | 30.00th=[ 8557], 40.00th=[10805], 50.00th=[12013], 60.00th=[12013],
00:17:08.525 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:17:08.525 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:17:08.525 | 99.99th=[12147]
00:17:08.525 lat (msec) : >=2000=100.00%
00:17:08.525 cpu : usr=0.00%, sys=0.67%, ctx=107, majf=0, minf=15873
00:17:08.525 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:08.525 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357874: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=8, BW=8534KiB/s (8739kB/s)(101MiB/12119msec)
00:17:08.525 slat (usec): min=837, max=2086.7k, avg=99101.06, stdev=416730.81
00:17:08.525 clat (msec): min=2108, max=12113, avg=9013.11, stdev=3632.83
00:17:08.525 lat (msec): min=2118, max=12118, avg=9112.21, stdev=3578.72
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 2123], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:17:08.525 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[12013],
00:17:08.525 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147],
00:17:08.525 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:17:08.525 | 99.99th=[12147]
00:17:08.525 lat (msec) : >=2000=100.00%
00:17:08.525 cpu : usr=0.00%, sys=0.93%, ctx=101, majf=0, minf=25857
00:17:08.525 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=7.9%, 16=15.8%, 32=31.7%, >=64=37.6%
00:17:08.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.525 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.525 issued rwts: total=101,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.525 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.525 job2: (groupid=0, jobs=1): err= 0: pid=2357875: Mon Dec 9 18:05:13 2024
00:17:08.525 read: IOPS=3, BW=4018KiB/s (4114kB/s)(47.0MiB/11979msec)
00:17:08.525 slat (usec): min=419, max=2150.0k, avg=253309.34, stdev=660563.48
00:17:08.525 clat (msec): min=72, max=11967, avg=5868.47, stdev=3443.82
00:17:08.525 lat (msec): min=2095, max=11978, avg=6121.78, stdev=3446.12
00:17:08.525 clat percentiles (msec):
00:17:08.525 | 1.00th=[ 73], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2140],
00:17:08.525 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 6409],
00:17:08.525 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[11879], 95.00th=[11879],
00:17:08.525 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:08.525 | 99.99th=[12013]
00:17:08.525 lat (msec) : 100=2.13%, >=2000=97.87%
00:17:08.525 cpu : usr=0.00%, sys=0.36%, ctx=59, majf=0, minf=12033
00:17:08.525 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:08.526 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job2: (groupid=0, jobs=1): err= 0: pid=2357876: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=10, BW=10.6MiB/s (11.1MB/s)(107MiB/10087msec)
00:17:08.526 slat (usec): min=676, max=2099.2k, avg=93468.66, stdev=406129.56
00:17:08.526 clat (msec): min=85, max=10084, avg=5142.32, stdev=4044.69
00:17:08.526 lat (msec): min=87, max=10086, avg=5235.79, stdev=4042.28
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 88], 5.00th=[ 104], 10.00th=[ 142], 20.00th=[ 215],
00:17:08.526 | 30.00th=[ 2333], 40.00th=[ 2366], 50.00th=[ 4530], 60.00th=[ 6678],
00:17:08.526 | 70.00th=[ 8792], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134],
00:17:08.526 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:17:08.526 | 99.99th=[10134]
00:17:08.526 lat (msec) : 100=3.74%, 250=23.36%, 500=0.93%, >=2000=71.96%
00:17:08.526 cpu : usr=0.00%, sys=1.06%, ctx=98, majf=0, minf=27393
00:17:08.526 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.5%, 16=15.0%, 32=29.9%, >=64=41.1%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.526 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job2: (groupid=0, jobs=1): err= 0: pid=2357877: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=45, BW=45.6MiB/s (47.8MB/s)(461MiB/10113msec)
00:17:08.526 slat (usec): min=44, max=2241.5k, avg=21822.17, stdev=161573.23
00:17:08.526 clat (msec): min=50, max=7188, avg=2576.89, stdev=2377.30
00:17:08.526 lat (msec): min=174, max=7190, avg=2598.71, stdev=2381.21
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 188], 5.00th=[ 860], 10.00th=[ 911], 20.00th=[ 961],
00:17:08.526 | 30.00th=[ 986], 40.00th=[ 1011], 50.00th=[ 1045], 60.00th=[ 1099],
00:17:08.526 | 70.00th=[ 3205], 80.00th=[ 4933], 90.00th=[ 6946], 95.00th=[ 7080],
00:17:08.526 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7215], 99.95th=[ 7215],
00:17:08.526 | 99.99th=[ 7215]
00:17:08.526 bw ( KiB/s): min=18432, max=145408, per=2.55%, avg=85153.38, stdev=48938.46, samples=8
00:17:08.526 iops : min= 18, max= 142, avg=83.00, stdev=47.81, samples=8
00:17:08.526 lat (msec) : 100=0.22%, 250=0.87%, 500=1.95%, 1000=32.54%, 2000=27.77%
00:17:08.526 lat (msec) : >=2000=36.66%
00:17:08.526 cpu : usr=0.02%, sys=1.12%, ctx=838, majf=0, minf=32769
00:17:08.526 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.3%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:17:08.526 issued rwts: total=461,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job2: (groupid=0, jobs=1): err= 0: pid=2357879: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=3, BW=3672KiB/s (3760kB/s)(36.0MiB/10040msec)
00:17:08.526 slat (usec): min=376, max=2114.1k, avg=278426.76, stdev=670533.20
00:17:08.526 clat (msec): min=16, max=9958, avg=2992.25, stdev=3619.36
00:17:08.526 lat (msec): min=52, max=10039, avg=3270.68, stdev=3766.43
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 17], 5.00th=[ 53], 10.00th=[ 73], 20.00th=[ 75],
00:17:08.526 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 2299], 60.00th=[ 2333],
00:17:08.526 | 70.00th=[ 4530], 80.00th=[ 6678], 90.00th=[ 8792], 95.00th=[10000],
00:17:08.526 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:17:08.526 | 99.99th=[10000]
00:17:08.526 lat (msec) : 20=2.78%, 100=38.89%, 250=2.78%, 500=2.78%, >=2000=52.78%
00:17:08.526 cpu : usr=0.00%, sys=0.22%, ctx=79, majf=0, minf=9217
00:17:08.526 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:08.526 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job2: (groupid=0, jobs=1): err= 0: pid=2357880: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=4, BW=4261KiB/s (4363kB/s)(42.0MiB/10094msec)
00:17:08.526 slat (usec): min=997, max=2152.2k, avg=238097.47, stdev=637428.39
00:17:08.526 clat (msec): min=93, max=10088, avg=7349.54, stdev=3864.39
00:17:08.526 lat (msec): min=99, max=10093, avg=7587.64, stdev=3711.44
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 93], 5.00th=[ 108], 10.00th=[ 128], 20.00th=[ 2366],
00:17:08.526 | 30.00th=[ 6678], 40.00th=[ 8792], 50.00th=[10000], 60.00th=[10000],
00:17:08.526 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134],
00:17:08.526 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:17:08.526 | 99.99th=[10134]
00:17:08.526 lat (msec) : 100=4.76%, 250=11.90%, >=2000=83.33%
00:17:08.526 cpu : usr=0.00%, sys=0.38%, ctx=97, majf=0, minf=10753
00:17:08.526 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:08.526 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job2: (groupid=0, jobs=1): err= 0: pid=2357881: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=39, BW=39.1MiB/s (41.0MB/s)(397MiB/10143msec)
00:17:08.526 slat (usec): min=43, max=2082.2k, avg=25435.68, stdev=175840.77
00:17:08.526 clat (msec): min=43, max=8684, avg=2832.50, stdev=2451.88
00:17:08.526 lat (msec): min=173, max=9581, avg=2857.93, stdev=2458.42
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 180], 5.00th=[ 701], 10.00th=[ 827], 20.00th=[ 969],
00:17:08.526 | 30.00th=[ 1011], 40.00th=[ 1099], 50.00th=[ 1183], 60.00th=[ 2089],
00:17:08.526 | 70.00th=[ 4329], 80.00th=[ 6544], 90.00th=[ 6611], 95.00th=[ 6611],
00:17:08.526 | 99.00th=[ 6745], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:17:08.526 | 99.99th=[ 8658]
00:17:08.526 bw ( KiB/s): min= 6144, max=165888, per=2.35%, avg=78436.43, stdev=58711.35, samples=7
00:17:08.526 iops : min= 6, max= 162, avg=76.29, stdev=57.45, samples=7
00:17:08.526 lat (msec) : 50=0.25%, 250=1.26%, 500=1.51%, 750=3.53%, 1000=21.91%
00:17:08.526 lat (msec) : 2000=28.46%, >=2000=43.07%
00:17:08.526 cpu : usr=0.01%, sys=0.94%, ctx=769, majf=0, minf=32769
00:17:08.526 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:17:08.526 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job3: (groupid=0, jobs=1): err= 0: pid=2357887: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=8, BW=8910KiB/s (9124kB/s)(105MiB/12067msec)
00:17:08.526 slat (usec): min=605, max=2161.9k, avg=114240.86, stdev=450794.89
00:17:08.526 clat (msec): min=70, max=12064, avg=11054.96, stdev=2068.14
00:17:08.526 lat (msec): min=2099, max=12066, avg=11169.20, stdev=1764.58
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 2106], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[11208],
00:17:08.526 | 30.00th=[11342], 40.00th=[11476], 50.00th=[11610], 60.00th=[11745],
00:17:08.526 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013],
00:17:08.526 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:08.526 | 99.99th=[12013]
00:17:08.526 lat (msec) : 100=0.95%, >=2000=99.05%
00:17:08.526 cpu : usr=0.00%, sys=0.80%, ctx=262, majf=0, minf=26881
00:17:08.526 IO depths : 1=1.0%, 2=1.9%, 4=3.8%, 8=7.6%, 16=15.2%, 32=30.5%, >=64=40.0%
00:17:08.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.526 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.526 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.526 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.526 job3: (groupid=0, jobs=1): err= 0: pid=2357888: Mon Dec 9 18:05:13 2024
00:17:08.526 read: IOPS=16, BW=16.7MiB/s (17.5MB/s)(202MiB/12082msec)
00:17:08.526 slat (usec): min=577, max=2165.9k, avg=59443.70, stdev=325329.44
00:17:08.526 clat (msec): min=72, max=11489, avg=7315.22, stdev=4593.07
00:17:08.526 lat (msec): min=898, max=11491, avg=7374.67, stdev=4570.43
00:17:08.526 clat percentiles (msec):
00:17:08.526 | 1.00th=[ 894], 5.00th=[ 927], 10.00th=[ 961], 20.00th=[ 1003],
00:17:08.526 | 30.00th=[ 2140], 40.00th=[ 7282], 50.00th=[10805], 60.00th=[10939],
00:17:08.526 | 70.00th=[11073], 80.00th=[11208], 90.00th=[11342], 95.00th=[11476],
00:17:08.526 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476],
00:17:08.526 | 99.99th=[11476]
00:17:08.526 bw ( KiB/s): min= 4096, max=88064, per=0.65%, avg=21655.43, stdev=30255.91, samples=7
00:17:08.526 iops : min= 4, max= 86, avg=21.14, stdev=29.55, samples=7
00:17:08.526 lat (msec) : 100=0.50%, 1000=19.31%, 2000=8.42%, >=2000=71.78%
00:17:08.526 cpu : usr=0.00%, sys=0.99%, ctx=387, majf=0, minf=32769
00:17:08.527 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:17:08.527 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357889: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=18, BW=18.6MiB/s (19.5MB/s)(222MiB/11909msec)
00:17:08.527 slat (usec): min=517, max=2160.1k, avg=45168.74, stdev=281178.76
00:17:08.527 clat (msec): min=583, max=11367, avg=6589.17, stdev=4764.30
00:17:08.527 lat (msec): min=584, max=11369, avg=6634.34, stdev=4762.51
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 592], 5.00th=[ 634], 10.00th=[ 667], 20.00th=[ 718],
00:17:08.527 | 30.00th=[ 776], 40.00th=[ 2970], 50.00th=[ 9463], 60.00th=[10805],
00:17:08.527 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342],
00:17:08.527 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342],
00:17:08.527 | 99.99th=[11342]
00:17:08.527 bw ( KiB/s): min= 2048, max=90112, per=0.81%, avg=26973.57, stdev=33259.72, samples=7
00:17:08.527 iops : min= 2, max= 88, avg=26.29, stdev=32.48, samples=7
00:17:08.527 lat (msec) : 750=25.68%, 1000=5.86%, 2000=4.50%, >=2000=63.96%
00:17:08.527 cpu : usr=0.03%, sys=0.92%, ctx=354, majf=0, minf=32769
00:17:08.527 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.4%, >=64=71.6%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0%
00:17:08.527 issued rwts: total=222,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357890: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=8, BW=8244KiB/s (8442kB/s)(81.0MiB/10061msec)
00:17:08.527 slat (usec): min=475, max=2077.8k, avg=123491.57, stdev=461923.36
00:17:08.527 clat (msec): min=57, max=10059, avg=4220.90, stdev=3469.91
00:17:08.527 lat (msec): min=62, max=10060, avg=4344.39, stdev=3497.78
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 58], 5.00th=[ 117], 10.00th=[ 133], 20.00th=[ 243],
00:17:08.527 | 30.00th=[ 2366], 40.00th=[ 2400], 50.00th=[ 4530], 60.00th=[ 4597],
00:17:08.527 | 70.00th=[ 6678], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[10000],
00:17:08.527 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:17:08.527 | 99.99th=[10000]
00:17:08.527 lat (msec) : 100=2.47%, 250=19.75%, 500=6.17%, >=2000=71.60%
00:17:08.527 cpu : usr=0.00%, sys=0.73%, ctx=84, majf=0, minf=20737
00:17:08.527 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.527 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357891: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=11, BW=11.1MiB/s (11.6MB/s)(133MiB/12029msec)
00:17:08.527 slat (usec): min=542, max=2189.2k, avg=90017.41, stdev=403243.08
00:17:08.527 clat (msec): min=55, max=12012, avg=10726.26, stdev=1834.90
00:17:08.527 lat (msec): min=2133, max=12013, avg=10816.28, stdev=1583.57
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 2140], 5.00th=[ 6342], 10.00th=[10805], 20.00th=[10805],
00:17:08.527 | 30.00th=[10939], 40.00th=[10939], 50.00th=[11073], 60.00th=[11208],
00:17:08.527 | 70.00th=[11476], 80.00th=[11610], 90.00th=[11879], 95.00th=[12013],
00:17:08.527 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:08.527 | 99.99th=[12013]
00:17:08.527 bw ( KiB/s): min= 1809, max= 4087, per=0.07%, avg=2498.00, stdev=1065.31, samples=4
00:17:08.527 iops : min= 1, max= 3, avg= 2.00, stdev= 0.82, samples=4
00:17:08.527 lat (msec) : 100=0.75%, >=2000=99.25%
00:17:08.527 cpu : usr=0.00%, sys=0.82%, ctx=315, majf=0, minf=32769
00:17:08.527 IO depths : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.0%, 16=12.0%, 32=24.1%, >=64=52.6%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=85.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=14.3%
00:17:08.527 issued rwts: total=133,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357892: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=59, BW=60.0MiB/s (62.9MB/s)(604MiB/10070msec)
00:17:08.527 slat (usec): min=56, max=2092.1k, avg=16605.84, stdev=143006.78
00:17:08.527 clat (msec): min=33, max=5983, avg=1330.56, stdev=1355.72
00:17:08.527 lat (msec): min=101, max=5989, avg=1347.16, stdev=1367.26
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 142], 5.00th=[ 600], 10.00th=[ 625], 20.00th=[ 634],
00:17:08.527 | 30.00th=[ 642], 40.00th=[ 642], 50.00th=[ 642], 60.00th=[ 651],
00:17:08.527 | 70.00th=[ 701], 80.00th=[ 2165], 90.00th=[ 2333], 95.00th=[ 5873],
00:17:08.527 | 99.00th=[ 5940], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007],
00:17:08.527 | 99.99th=[ 6007]
00:17:08.527 bw ( KiB/s): min=53248, max=215040, per=4.70%, avg=157013.33, stdev=74460.13, samples=6
00:17:08.527 iops : min= 52, max= 210, avg=153.33, stdev=72.71, samples=6
00:17:08.527 lat (msec) : 50=0.17%, 250=1.99%, 500=0.66%, 750=67.38%, 2000=2.98%
00:17:08.527 lat (msec) : >=2000=26.82%
00:17:08.527 cpu : usr=0.06%, sys=1.88%, ctx=530, majf=0, minf=32769
00:17:08.527 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:17:08.527 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357893: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=8, BW=8927KiB/s (9141kB/s)(105MiB/12045msec)
00:17:08.527 slat (usec): min=573, max=2189.1k, avg=114007.25, stdev=447964.81
00:17:08.527 clat (msec): min=73, max=12042, avg=10700.64, stdev=2127.84
00:17:08.527 lat (msec): min=2112, max=12044, avg=10814.65, stdev=1856.33
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 2106], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[10939],
00:17:08.527 | 30.00th=[11073], 40.00th=[11073], 50.00th=[11208], 60.00th=[11342],
00:17:08.527 | 70.00th=[11476], 80.00th=[11610], 90.00th=[12013], 95.00th=[12013],
00:17:08.527 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:08.527 | 99.99th=[12013]
00:17:08.527 lat (msec) : 100=0.95%, >=2000=99.05%
00:17:08.527 cpu : usr=0.00%, sys=0.68%, ctx=329, majf=0, minf=26881
00:17:08.527 IO depths : 1=1.0%, 2=1.9%, 4=3.8%, 8=7.6%, 16=15.2%, 32=30.5%, >=64=40.0%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.527 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357894: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=55, BW=55.9MiB/s (58.6MB/s)(564MiB/10098msec)
00:17:08.527 slat (usec): min=44, max=2123.7k, avg=17749.73, stdev=133740.82
00:17:08.527 clat (msec): min=82, max=8187, avg=898.00, stdev=1287.17
00:17:08.527 lat (msec): min=180, max=8207, avg=915.75, stdev=1324.72
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 271], 5.00th=[ 338], 10.00th=[ 380], 20.00th=[ 384],
00:17:08.527 | 30.00th=[ 384], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 430],
00:17:08.527 | 70.00th=[ 567], 80.00th=[ 1234], 90.00th=[ 2072], 95.00th=[ 2668],
00:17:08.527 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8221], 99.95th=[ 8221],
00:17:08.527 | 99.99th=[ 8221]
00:17:08.527 bw ( KiB/s): min=49152, max=339968, per=6.59%, avg=220288.50, stdev=125982.72, samples=4
00:17:08.527 iops : min= 48, max= 332, avg=215.00, stdev=123.04, samples=4
00:17:08.527 lat (msec) : 100=0.18%, 250=0.53%, 500=64.54%, 750=11.70%, 1000=0.89%
00:17:08.527 lat (msec) : 2000=11.52%, >=2000=10.64%
00:17:08.527 cpu : usr=0.02%, sys=1.39%, ctx=1092, majf=0, minf=32769
00:17:08.527 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:17:08.527 issued rwts: total=564,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357895: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=18, BW=18.2MiB/s (19.1MB/s)(220MiB/12091msec)
00:17:08.527 slat (usec): min=938, max=4233.1k, avg=54628.69, stdev=341404.55
00:17:08.527 clat (msec): min=70, max=7113, avg=4579.57, stdev=1696.27
00:17:08.527 lat (msec): min=1710, max=7139, avg=4634.20, stdev=1664.45
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 1703], 5.00th=[ 2072], 10.00th=[ 2106], 20.00th=[ 2165],
00:17:08.527 | 30.00th=[ 3943], 40.00th=[ 4144], 50.00th=[ 4933], 60.00th=[ 5201],
00:17:08.527 | 70.00th=[ 5470], 80.00th=[ 5805], 90.00th=[ 6946], 95.00th=[ 7013],
00:17:08.527 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080],
00:17:08.527 | 99.99th=[ 7080]
00:17:08.527 bw ( KiB/s): min= 4096, max=57344, per=1.13%, avg=37654.40, stdev=21991.75, samples=5
00:17:08.527 iops : min= 4, max= 56, avg=36.40, stdev=21.52, samples=5
00:17:08.527 lat (msec) : 100=0.45%, 2000=2.73%, >=2000=96.82%
00:17:08.527 cpu : usr=0.00%, sys=1.11%, ctx=553, majf=0, minf=32769
00:17:08.527 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.3%, 32=14.5%, >=64=71.4%
00:17:08.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.527 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
00:17:08.527 issued rwts: total=220,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.527 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.527 job3: (groupid=0, jobs=1): err= 0: pid=2357896: Mon Dec 9 18:05:13 2024
00:17:08.527 read: IOPS=123, BW=124MiB/s (130MB/s)(1477MiB/11956msec)
00:17:08.527 slat (usec): min=41, max=2092.3k, avg=8037.74, stdev=80741.72
00:17:08.527 clat (msec): min=76, max=5957, avg=908.79, stdev=792.63
00:17:08.527 lat (msec): min=391, max=5989, avg=916.83, stdev=798.83
00:17:08.527 clat percentiles (msec):
00:17:08.527 | 1.00th=[ 393], 5.00th=[ 409], 10.00th=[ 451], 20.00th=[ 506],
00:17:08.527 | 30.00th=[ 510], 40.00th=[ 518], 50.00th=[ 535], 60.00th=[ 651],
00:17:08.527 | 70.00th=[ 667], 80.00th=[ 693], 90.00th=[ 2601], 95.00th=[ 2735],
00:17:08.527 | 99.00th=[ 2802], 99.50th=[ 2802], 99.90th=[ 4866], 99.95th=[ 5940],
00:17:08.527 | 99.99th=[ 5940]
00:17:08.527 bw ( KiB/s): min=10240, max=294323, per=5.85%, avg=195362.36, stdev=75631.86, samples=14
00:17:08.528 iops : min= 10, max= 287, avg=190.71, stdev=73.78, samples=14
00:17:08.528 lat (msec) : 100=0.07%, 500=16.52%, 750=65.81%, 2000=0.41%, >=2000=17.20%
00:17:08.528 cpu : usr=0.08%, sys=1.84%, ctx=1332, majf=0, minf=32769
00:17:08.528 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.528 issued rwts: total=1477,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job3: (groupid=0, jobs=1): err= 0: pid=2357897: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=27, BW=27.6MiB/s (28.9MB/s)(330MiB/11975msec)
00:17:08.528 slat (usec): min=69, max=2099.2k, avg=36068.26, stdev=212835.80
00:17:08.528 clat (msec): min=70, max=6938, avg=2465.10, stdev=1436.24
00:17:08.528 lat (msec): min=949, max=6951, avg=2501.17, stdev=1450.40
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 953], 5.00th=[ 995], 10.00th=[ 1070], 20.00th=[ 1267],
00:17:08.528 | 30.00th=[ 1401], 40.00th=[ 1469], 50.00th=[ 1552], 60.00th=[ 3440],
00:17:08.528 | 70.00th=[ 3574], 80.00th=[ 3742], 90.00th=[ 4010], 95.00th=[ 4178],
00:17:08.528 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946],
00:17:08.528 | 99.99th=[ 6946]
00:17:08.528 bw ( KiB/s): min= 6460, max=172032, per=2.05%, avg=68660.67, stdev=64595.54, samples=6
00:17:08.528 iops : min= 6, max= 168, avg=67.00, stdev=63.14, samples=6
00:17:08.528 lat (msec) : 100=0.30%, 1000=5.45%, 2000=49.39%, >=2000=44.85%
00:17:08.528 cpu : usr=0.01%, sys=0.84%, ctx=731, majf=0, minf=32769
00:17:08.528 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=80.9%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:17:08.528 issued rwts: total=330,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job3: (groupid=0, jobs=1): err= 0: pid=2357898: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=84, BW=84.1MiB/s (88.2MB/s)(1009MiB/11992msec)
00:17:08.528 slat (usec): min=43, max=2067.2k, avg=9904.87, stdev=78482.92
00:17:08.528 clat (msec): min=529, max=6798, avg=1434.73, stdev=1785.62
00:17:08.528 lat (msec): min=530, max=6801, avg=1444.64, stdev=1791.76
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 531], 5.00th=[ 535], 10.00th=[ 535], 20.00th=[ 542],
00:17:08.528 | 30.00th=[ 584], 40.00th=[ 735], 50.00th=[ 760], 60.00th=[ 802],
00:17:08.528 | 70.00th=[ 860], 80.00th=[ 902], 90.00th=[ 4144], 95.00th=[ 6611],
00:17:08.528 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812],
00:17:08.528 | 99.99th=[ 6812]
00:17:08.528 bw ( KiB/s): min= 7787, max=247808, per=4.16%, avg=138877.08, stdev=86187.27, samples=13
00:17:08.528 iops : min= 7, max= 242, avg=135.46, stdev=84.29, samples=13
00:17:08.528 lat (msec) : 750=46.28%, 1000=36.97%, 2000=0.59%, >=2000=16.15%
00:17:08.528 cpu : usr=0.06%, sys=1.43%, ctx=1513, majf=0, minf=32769
00:17:08.528 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.528 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job3: (groupid=0, jobs=1): err= 0: pid=2357899: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=41, BW=41.1MiB/s (43.1MB/s)(417MiB/10150msec)
00:17:08.528 slat (usec): min=603, max=2154.6k, avg=24133.79, stdev=169866.49
00:17:08.528 clat (msec): min=82, max=7323, avg=2903.62, stdev=2603.51
00:17:08.528 lat (msec): min=166, max=7326, avg=2927.75, stdev=2604.56
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 911], 5.00th=[ 936], 10.00th=[ 961], 20.00th=[ 995],
00:17:08.528 | 30.00th=[ 1036], 40.00th=[ 1250], 50.00th=[ 1385], 60.00th=[ 1519],
00:17:08.528 | 70.00th=[ 4463], 80.00th=[ 6745], 90.00th=[ 7013], 95.00th=[ 7148],
00:17:08.528 | 99.00th=[ 7282], 99.50th=[ 7282], 99.90th=[ 7349], 99.95th=[ 7349],
00:17:08.528 | 99.99th=[ 7349]
00:17:08.528 bw ( KiB/s): min= 2052, max=139264, per=1.97%, avg=65712.56, stdev=59888.88, samples=9
00:17:08.528 iops : min= 2, max= 136, avg=64.00, stdev=58.33, samples=9
00:17:08.528 lat (msec) : 100=0.24%, 250=0.72%, 1000=21.82%, 2000=45.56%, >=2000=31.65%
00:17:08.528 cpu : usr=0.03%, sys=1.53%, ctx=978, majf=0, minf=32769
00:17:08.528 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:17:08.528 issued rwts: total=417,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job4: (groupid=0, jobs=1): err= 0: pid=2357909: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=39, BW=39.4MiB/s (41.3MB/s)(468MiB/11875msec)
00:17:08.528 slat (usec): min=45, max=2143.2k, avg=21361.51, stdev=173705.38
00:17:08.528 clat (msec): min=628, max=6023, avg=1863.60, stdev=1706.29
00:17:08.528 lat (msec): min=631, max=6056, avg=1884.96, stdev=1719.81
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 634], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 634],
00:17:08.528 | 30.00th=[ 642], 40.00th=[ 642], 50.00th=[ 642], 60.00th=[ 659],
00:17:08.528 | 70.00th=[ 2802], 80.00th=[ 4329], 90.00th=[ 4597], 95.00th=[ 4665],
00:17:08.528 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007],
00:17:08.528 | 99.99th=[ 6007]
00:17:08.528 bw ( KiB/s): min=47104, max=206848, per=4.11%, avg=137292.40, stdev=79968.65, samples=5
00:17:08.528 iops : min= 46, max= 202, avg=134.00, stdev=78.19, samples=5
00:17:08.528 lat (msec) : 750=61.32%, 2000=3.85%, >=2000=34.83%
00:17:08.528 cpu : usr=0.03%, sys=1.37%, ctx=452, majf=0, minf=32769
00:17:08.528 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.5%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:17:08.528 issued rwts: total=468,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job4: (groupid=0, jobs=1): err= 0: pid=2357910: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=7, BW=7455KiB/s (7634kB/s)(74.0MiB/10164msec)
00:17:08.528 slat (usec): min=682, max=2093.3k, avg=135949.01, stdev=485656.64
00:17:08.528 clat (msec): min=102, max=10160, avg=7578.68, stdev=3666.38
00:17:08.528 lat (msec): min=175, max=10163, avg=7714.63, stdev=3570.64
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 104], 5.00th=[ 192], 10.00th=[ 226], 20.00th=[ 4463],
00:17:08.528 | 30.00th=[ 6678], 40.00th=[ 8792], 50.00th=[10000], 60.00th=[10000],
00:17:08.528 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134],
00:17:08.528 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:17:08.528 | 99.99th=[10134]
00:17:08.528 lat (msec) : 250=14.86%, >=2000=85.14%
00:17:08.528 cpu : usr=0.01%, sys=0.79%, ctx=137, majf=0, minf=18945
00:17:08.528 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.528 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job4: (groupid=0, jobs=1): err= 0: pid=2357911: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=49, BW=49.5MiB/s (51.9MB/s)(597MiB/12064msec)
00:17:08.528 slat (usec): min=48, max=2058.5k, avg=20074.88, stdev=154496.01
00:17:08.528 clat (msec): min=76, max=6313, avg=1903.43, stdev=1691.41
00:17:08.528 lat (msec): min=519, max=6319, avg=1923.50, stdev=1702.34
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 518], 5.00th=[ 523], 10.00th=[ 527], 20.00th=[ 584],
00:17:08.528 | 30.00th=[ 684], 40.00th=[ 885], 50.00th=[ 969], 60.00th=[ 1036],
00:17:08.528 | 70.00th=[ 3306], 80.00th=[ 3574], 90.00th=[ 4866], 95.00th=[ 5000],
00:17:08.528 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6342], 99.95th=[ 6342],
00:17:08.528 | 99.99th=[ 6342]
00:17:08.528 bw ( KiB/s): min= 6144, max=247808, per=4.11%, avg=137216.00, stdev=73718.52, samples=7
00:17:08.528 iops : min= 6, max= 242, avg=134.00, stdev=71.99, samples=7
00:17:08.528 lat (msec) : 100=0.17%, 750=32.66%, 1000=19.77%, 2000=13.57%, >=2000=33.84%
00:17:08.528 cpu : usr=0.01%, sys=1.01%, ctx=856, majf=0, minf=32769
00:17:08.528 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:17:08.528 issued rwts: total=597,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job4: (groupid=0, jobs=1): err= 0: pid=2357912: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=4, BW=5066KiB/s (5187kB/s)(59.0MiB/11926msec)
00:17:08.528 slat (usec): min=783, max=2083.8k, avg=201042.07, stdev=585994.85
00:17:08.528 clat (msec): min=64, max=11921, avg=9203.49, stdev=2900.91
00:17:08.528 lat (msec): min=2055, max=11925, avg=9404.54, stdev=2657.40
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 65], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477],
00:17:08.528 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[10671],
00:17:08.528 | 70.00th=[11745], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879],
00:17:08.528 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879],
00:17:08.528 | 99.99th=[11879]
00:17:08.528 lat (msec) : 100=1.69%, >=2000=98.31%
00:17:08.528 cpu : usr=0.00%, sys=0.46%, ctx=71, majf=0, minf=15105
00:17:08.528 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0%
00:17:08.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.528 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:08.528 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.528 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.528 job4: (groupid=0, jobs=1): err= 0: pid=2357913: Mon Dec 9 18:05:13 2024
00:17:08.528 read: IOPS=135, BW=135MiB/s (142MB/s)(1622MiB/11993msec)
00:17:08.528 slat (usec): min=37, max=2080.4k, avg=7332.39, stdev=92278.20
00:17:08.528 clat (msec): min=92, max=5762, avg=540.47, stdev=706.42
00:17:08.528 lat (msec): min=190, max=5771, avg=547.80, stdev=719.55
00:17:08.528 clat percentiles (msec):
00:17:08.528 | 1.00th=[ 190], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 222],
00:17:08.528 | 30.00th=[ 236], 40.00th=[ 253], 50.00th=[ 271], 60.00th=[ 368],
00:17:08.528 | 70.00th=[ 485], 80.00th=[ 558], 90.00th=[ 667], 95.00th=[ 2400],
00:17:08.528 | 99.00th=[ 2567], 99.50th=[ 4597], 99.90th=[ 5738], 99.95th=[ 5738],
00:17:08.528 | 99.99th=[ 5738]
00:17:08.529 bw ( KiB/s): min=148593, max=538624, per=10.04%, avg=335481.89, stdev=151775.89, samples=9
00:17:08.529 iops : min= 145, max= 526, avg=327.56, stdev=148.26, samples=9
00:17:08.529 lat (msec) : 100=0.06%, 250=39.33%, 500=32.06%, 750=19.61%, >=2000=8.94%
00:17:08.529 cpu : usr=0.07%, sys=1.44%, ctx=2726, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.529 issued rwts: total=1622,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357914: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=188, BW=189MiB/s (198MB/s)(2267MiB/12021msec)
00:17:08.529 slat (usec): min=41, max=2119.3k, avg=5257.97, stdev=87246.05
00:17:08.529 clat (msec): min=93, max=8793, avg=655.76, stdev=1947.68
00:17:08.529 lat (msec): min=119, max=8793, avg=661.02, stdev=1954.85
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 120], 5.00th=[ 120], 10.00th=[ 121], 20.00th=[ 121],
00:17:08.529 | 30.00th=[ 122], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 123],
00:17:08.529 | 70.00th=[ 124], 80.00th=[ 388], 90.00th=[ 460], 95.00th=[ 8658],
00:17:08.529 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792],
00:17:08.529 | 99.99th=[ 8792]
00:17:08.529 bw ( KiB/s): min= 4087, max=1079296, per=14.56%, avg=486587.11, stdev=462654.00, samples=9
00:17:08.529 iops : min= 3, max= 1054, avg=474.89, stdev=452.09, samples=9
00:17:08.529 lat (msec) : 100=0.04%, 250=77.37%, 500=14.82%, 750=1.85%, >=2000=5.91%
00:17:08.529 cpu : usr=0.03%, sys=1.98%, ctx=2434, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.529 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357915: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=6, BW=6945KiB/s (7111kB/s)(82.0MiB/12091msec)
00:17:08.529 slat (usec): min=464, max=2123.0k, avg=146553.47, stdev=509790.32
00:17:08.529 clat (msec): min=73, max=12089, avg=9671.09, stdev=3672.02
00:17:08.529 lat (msec): min=2112, max=12090, avg=9817.65, stdev=3520.94
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 73], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 6409],
00:17:08.529 | 30.00th=[ 8658], 40.00th=[11879], 50.00th=[11879], 60.00th=[12013],
00:17:08.529 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013],
00:17:08.529 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:17:08.529 | 99.99th=[12147]
00:17:08.529 lat (msec) : 100=1.22%, >=2000=98.78%
00:17:08.529 cpu : usr=0.01%, sys=0.73%, ctx=126, majf=0, minf=20993
00:17:08.529 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=9.8%, 16=19.5%, 32=39.0%, >=64=23.2%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:08.529 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357916: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=15, BW=15.4MiB/s (16.2MB/s)(186MiB/12054msec)
00:17:08.529 slat (usec): min=523, max=2103.6k, avg=64390.56, stdev=309858.97
00:17:08.529 clat (msec): min=76, max=6971, avg=5435.78, stdev=1242.13
00:17:08.529 lat (msec): min=1692, max=8564, avg=5500.17, stdev=1198.85
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 1687], 5.00th=[ 2140], 10.00th=[ 4144], 20.00th=[ 4799],
00:17:08.529 | 30.00th=[ 5067], 40.00th=[ 5403], 50.00th=[ 5604], 60.00th=[ 5873],
00:17:08.529 | 70.00th=[ 6007], 80.00th=[ 6342], 90.00th=[ 6879], 95.00th=[ 6946],
00:17:08.529 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946],
00:17:08.529 | 99.99th=[ 6946]
00:17:08.529 bw ( KiB/s): min= 6144, max=63361, per=0.88%, avg=29349.75, stdev=27089.69, samples=4
00:17:08.529 iops : min= 6, max= 61, avg=28.25, stdev=26.29, samples=4
00:17:08.529 lat (msec) : 100=0.54%, 2000=2.69%, >=2000=96.77%
00:17:08.529 cpu : usr=0.02%, sys=0.79%, ctx=449, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7%
00:17:08.529 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357917: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=35, BW=35.4MiB/s (37.2MB/s)(358MiB/10102msec)
00:17:08.529 slat (usec): min=54, max=2125.8k, avg=28030.81, stdev=199670.23
00:17:08.529 clat (msec): min=64, max=6324, avg=1640.58, stdev=1241.17
00:17:08.529 lat (msec): min=117, max=6328, avg=1668.61, stdev=1262.84
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 126], 5.00th=[ 279], 10.00th=[ 776], 20.00th=[ 785],
00:17:08.529 | 30.00th=[ 793], 40.00th=[ 835], 50.00th=[ 835], 60.00th=[ 869],
00:17:08.529 | 70.00th=[ 2635], 80.00th=[ 2903], 90.00th=[ 3037], 95.00th=[ 3071],
00:17:08.529 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342],
00:17:08.529 | 99.99th=[ 6342]
00:17:08.529 bw ( KiB/s): min=34816, max=172032, per=2.82%, avg=94208.00, stdev=66944.86, samples=5
00:17:08.529 iops : min= 34, max= 168, avg=92.00, stdev=65.38, samples=5
00:17:08.529 lat (msec) : 100=0.28%, 250=4.47%, 500=0.28%, 1000=56.42%, >=2000=38.55%
00:17:08.529 cpu : usr=0.03%, sys=1.13%, ctx=667, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.4%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:17:08.529 issued rwts: total=358,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357918: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=132, BW=133MiB/s (139MB/s)(1331MiB/10040msec)
00:17:08.529 slat (usec): min=35, max=2068.2k, avg=7509.66, stdev=85292.60
00:17:08.529 clat (msec): min=39, max=5846, avg=544.89, stdev=836.49
00:17:08.529 lat (msec): min=39, max=5848, avg=552.40, stdev=849.55
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 155], 5.00th=[ 215], 10.00th=[ 220], 20.00th=[ 228],
00:17:08.529 | 30.00th=[ 249], 40.00th=[ 259], 50.00th=[ 275], 60.00th=[ 300],
00:17:08.529 | 70.00th=[ 351], 80.00th=[ 609], 90.00th=[ 1099], 95.00th=[ 1351],
00:17:08.529 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5873], 99.95th=[ 5873],
00:17:08.529 | 99.99th=[ 5873]
00:17:08.529 bw ( KiB/s): min=90112, max=542720, per=8.20%, avg=273976.89, stdev=187514.44, samples=9
00:17:08.529 iops : min= 88, max= 530, avg=267.56, stdev=183.12, samples=9
00:17:08.529 lat (msec) : 50=0.23%, 250=30.80%, 500=43.80%, 750=7.66%, 1000=5.11%
00:17:08.529 lat (msec) : 2000=9.54%, >=2000=2.85%
00:17:08.529 cpu : usr=0.03%, sys=1.86%, ctx=2717, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.529 issued rwts: total=1331,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357919: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=176, BW=176MiB/s (185MB/s)(1765MiB/10016msec)
00:17:08.529 slat (usec): min=40, max=2083.9k, avg=5660.92, stdev=70384.16
00:17:08.529 clat (msec): min=13, max=4644, avg=691.52, stdev=1058.28
00:17:08.529 lat (msec): min=15, max=4645, avg=697.18, stdev=1061.87
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 31], 5.00th=[ 257], 10.00th=[ 259], 20.00th=[ 262],
00:17:08.529 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 481], 60.00th=[ 514],
00:17:08.529 | 70.00th=[ 523], 80.00th=[ 558], 90.00th=[ 617], 95.00th=[ 4530],
00:17:08.529 | 99.00th=[ 4597], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665],
00:17:08.529 | 99.99th=[ 4665]
00:17:08.529 bw ( KiB/s): min=22483, max=499712, per=8.09%, avg=270468.67, stdev=166040.85, samples=12
00:17:08.529 iops : min= 21, max= 488, avg=263.92, stdev=162.32, samples=12
00:17:08.529 lat (msec) : 20=0.34%, 50=1.47%, 100=0.28%, 250=0.85%, 500=47.99%
00:17:08.529 lat (msec) : 750=41.13%, >=2000=7.93%
00:17:08.529 cpu : usr=0.09%, sys=2.58%, ctx=1553, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.529 issued rwts: total=1765,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357920: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=18, BW=18.2MiB/s (19.0MB/s)(217MiB/11947msec)
00:17:08.529 slat (usec): min=475, max=2117.4k, avg=46186.44, stdev=278649.41
00:17:08.529 clat (msec): min=235, max=11722, avg=6332.67, stdev=4593.97
00:17:08.529 lat (msec): min=239, max=11726, avg=6378.85, stdev=4595.91
00:17:08.529 clat percentiles (msec):
00:17:08.529 | 1.00th=[ 239], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 241],
00:17:08.529 | 30.00th=[ 1452], 40.00th=[ 4178], 50.00th=[10268], 60.00th=[10402],
00:17:08.529 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10537],
00:17:08.529 | 99.00th=[10671], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745],
00:17:08.529 | 99.99th=[11745]
00:17:08.529 bw ( KiB/s): min= 2048, max=155648, per=1.10%, avg=36725.80, stdev=66569.58, samples=5
00:17:08.529 iops : min= 2, max= 152, avg=35.60, stdev=65.15, samples=5
00:17:08.529 lat (msec) : 250=23.04%, 2000=13.82%, >=2000=63.13%
00:17:08.529 cpu : usr=0.00%, sys=0.69%, ctx=441, majf=0, minf=32769
00:17:08.529 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.4%, 32=14.7%, >=64=71.0%
00:17:08.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.529 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1%
00:17:08.529 issued rwts: total=217,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.529 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.529 job4: (groupid=0, jobs=1): err= 0: pid=2357921: Mon Dec 9 18:05:13 2024
00:17:08.529 read: IOPS=36, BW=36.2MiB/s (38.0MB/s)(368MiB/10161msec)
00:17:08.529 slat (usec): min=97, max=2117.6k, avg=27328.28, stdev=197638.14
00:17:08.529 clat (msec): min=101, max=6319, avg=2051.94, stdev=1690.33
00:17:08.529 lat (msec): min=169, max=6324, avg=2079.27, stdev=1700.15
00:17:08.530 clat percentiles (msec):
00:17:08.530 | 1.00th=[ 215], 5.00th=[ 785], 10.00th=[ 785], 20.00th=[ 793],
00:17:08.530 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 2500],
00:17:08.530 | 70.00th=[ 2735], 80.00th=[ 2970], 90.00th=[ 5067], 95.00th=[ 6275],
00:17:08.530 | 99.00th=[ 6342], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342],
00:17:08.530 | 99.99th=[ 6342]
00:17:08.530 bw ( KiB/s): min= 6144, max=161792, per=2.94%, avg=98267.20, stdev=61102.39, samples=5
00:17:08.530 iops : min= 6, max= 158, avg=95.80, stdev=59.69, samples=5
00:17:08.530 lat (msec) : 250=1.09%, 1000=53.80%, >=2000=45.11%
00:17:08.530 cpu : usr=0.02%, sys=1.36%, ctx=698, majf=0, minf=32769
00:17:08.530 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.7%, >=64=82.9%
00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.530 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:17:08.530 issued rwts: total=368,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357931: Mon Dec 9 18:05:13 2024
00:17:08.530 read: IOPS=184, BW=185MiB/s (194MB/s)(1864MiB/10096msec)
00:17:08.530 slat (usec): min=40, max=2066.7k, avg=5357.60, stdev=53447.23
00:17:08.530 clat (msec): min=94, max=3793, avg=551.54, stdev=560.06
00:17:08.530 lat (msec): min=107, max=3801, avg=556.89, stdev=565.06
00:17:08.530 clat percentiles (msec):
00:17:08.530 | 1.00th=[ 255], 5.00th=[ 397], 10.00th=[ 397], 20.00th=[ 401],
00:17:08.530 | 30.00th=[ 401], 40.00th=[ 405], 50.00th=[ 414], 60.00th=[ 439],
00:17:08.530 | 70.00th=[ 514], 80.00th=[ 535], 90.00th=[ 558], 95.00th=[ 592],
00:17:08.530 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3809],
00:17:08.530 | 99.99th=[ 3809]
00:17:08.530 bw ( KiB/s): min=122880, max=329728, per=8.11%, avg=271068.62, stdev=59323.67, samples=13
00:17:08.530 iops : min= 120, max= 322, avg=264.62, stdev=57.98, samples=13
00:17:08.530 lat (msec) : 100=0.05%, 250=0.86%, 500=67.27%, 750=28.43%, >=2000=3.38%
00:17:08.530 cpu : usr=0.21%, sys=3.13%, ctx=1638, majf=0, minf=32769
00:17:08.530 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6%
00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.530 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.530 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357932: Mon Dec 9 18:05:13 2024
00:17:08.530 read: IOPS=187, BW=188MiB/s (197MB/s)(1890MiB/10078msec)
00:17:08.530 slat (usec): min=41, max=2088.1k, avg=5296.68, stdev=54475.32
00:17:08.530 clat (msec): min=55, max=3764, avg=545.85, stdev=585.21
00:17:08.530 lat (msec): min=86, max=3766, avg=551.14, stdev=590.02
00:17:08.530 clat percentiles (msec):
00:17:08.530 | 1.00th=[ 205], 5.00th=[ 257], 10.00th=[ 259], 20.00th=[ 262],
00:17:08.530 | 30.00th=[ 266], 40.00th=[ 300], 50.00th=[ 485], 60.00th=[ 550],
00:17:08.530 | 70.00th=[ 592], 80.00th=[ 651], 90.00th=[ 684], 95.00th=[ 709],
00:17:08.530 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775],
00:17:08.530 | 99.99th=[ 3775]
00:17:08.530 bw ( KiB/s): min=180224, max=501760, per=8.30%, avg=277488.38, stdev=126400.91, samples=13
00:17:08.530 iops : min= 176, max= 490, avg=270.85, stdev=123.53, samples=13
00:17:08.530 lat (msec) : 100=0.11%, 250=1.43%, 500=49.63%, 750=45.34%, 1000=0.05%
00:17:08.530 lat (msec) : >=2000=3.44%
00:17:08.530 cpu : usr=0.19%, sys=2.95%, ctx=1747, majf=0, minf=32769
00:17:08.530 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.530 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.530 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357933: Mon Dec 9 18:05:13 2024
00:17:08.530 read: IOPS=58, BW=58.2MiB/s (61.0MB/s)(584MiB/10035msec)
00:17:08.530 slat (usec): min=480, max=2137.9k, avg=17121.17, stdev=131410.97
00:17:08.530 clat (msec): min=33, max=4225, avg=1538.62, stdev=1099.02
00:17:08.530 lat (msec): min=37, max=4232, avg=1555.74, stdev=1104.92
00:17:08.530 clat percentiles (msec):
00:17:08.530 | 1.00th=[ 186], 5.00th=[ 584], 10.00th=[ 693], 20.00th=[ 810],
00:17:08.530 | 30.00th=[ 860], 40.00th=[ 919], 50.00th=[ 1036], 60.00th=[ 1083],
00:17:08.530 | 70.00th=[ 1267], 80.00th=[ 3004], 90.00th=[ 3138], 95.00th=[ 3306],
00:17:08.530 | 99.00th=[ 4212], 99.50th=[ 4212], 99.90th=[ 4212], 99.95th=[ 4212],
00:17:08.530 | 99.99th=[ 4212]
00:17:08.530 bw ( KiB/s): min=22528, max=190464, per=3.11%, avg=103981.89, stdev=53617.66, samples=9
00:17:08.530 iops : min= 22, max= 186, avg=101.44, stdev=52.38, samples=9
00:17:08.530 lat (msec) : 50=0.34%, 100=0.17%, 250=1.20%, 500=2.91%, 750=10.96%
00:17:08.530 lat (msec) : 1000=32.19%, 2000=25.17%, >=2000=27.05%
00:17:08.530 cpu : usr=0.03%, sys=1.44%, ctx=1708, majf=0, minf=32769
00:17:08.530 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2%
00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.530 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:17:08.530 issued rwts: total=584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357934: Mon Dec 9 18:05:13 2024
00:17:08.530 read: IOPS=43, BW=43.9MiB/s (46.1MB/s)(440MiB/10012msec)
00:17:08.530 slat (usec): min=417, max=2079.5k, avg=22722.67, stdev=149748.84
00:17:08.530 clat (msec): min=11, max=4744, avg=1773.72, stdev=1369.34
00:17:08.530 lat (msec): min=12, max=4750, avg=1796.44, stdev=1381.08
00:17:08.530 clat percentiles (msec):
00:17:08.530 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 55], 20.00th=[ 409],
00:17:08.530 | 30.00th=[ 944], 40.00th=[ 1418], 50.00th=[ 1452], 60.00th=[ 1502],
00:17:08.530 | 70.00th=[ 3507], 80.00th=[ 3675], 90.00th=[ 3708], 95.00th=[ 3742],
00:17:08.530 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 4732], 99.95th=[ 4732],
00:17:08.530 | 99.99th=[ 4732]
00:17:08.530 bw ( KiB/s): min= 2048, max=98304, per=1.88%, avg=62902.86, stdev=30736.25, samples=7
00:17:08.530 iops : min= 2, max= 96, avg=61.43, stdev=30.02, samples=7
00:17:08.530 lat (msec) : 20=2.27%, 50=6.82%, 100=4.55%, 250=3.18%, 500=5.45%
00:17:08.530 lat (msec) : 750=2.95%, 1000=5.68%, 2000=38.86%, >=2000=30.23%
00:17:08.530 cpu : usr=0.06%, sys=1.04%, ctx=1510, majf=0, minf=32769
00:17:08.530 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.7%
00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.530 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:17:08.530 issued rwts: total=440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357935: Mon Dec 9 18:05:13 2024
00:17:08.530 read: IOPS=36, BW=36.0MiB/s (37.7MB/s)(364MiB/10111msec)
00:17:08.530 slat (usec): min=543, max=2110.5k, avg=27472.23, stdev=164465.70
00:17:08.530 clat (msec): min=107, max=4660, avg=2380.98, stdev=1339.52
00:17:08.530 lat (msec): min=122, max=4662, avg=2408.46, stdev=1343.90
00:17:08.530 clat percentiles (msec):
00:17:08.530 | 1.00th=[ 224], 5.00th=[ 376], 10.00th=[ 684], 20.00th=[ 1318],
00:17:08.530 | 30.00th=[ 1435], 40.00th=[ 1586], 50.00th=[ 1804], 60.00th=[ 3675],
00:17:08.530 | 70.00th=[ 3742], 80.00th=[ 3775], 90.00th=[ 3842], 95.00th=[ 4597],
00:17:08.530 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665],
00:17:08.530 | 99.99th=[ 4665]
00:17:08.530 bw ( KiB/s): min=20480, max=86016, per=1.81%, avg=60425.62, stdev=22155.37, samples=8
00:17:08.530 iops : min= 20, max= 84, avg=59.00, stdev=21.65, samples=8
00:17:08.530 lat (msec) : 250=1.10%, 500=5.77%, 750=4.12%, 1000=2.47%, 2000=44.78%
00:17:08.530 lat (msec) : >=2000=41.76%
00:17:08.530 cpu : usr=0.00%, sys=1.32%, ctx=1166, majf=0, minf=32769
00:17:08.530 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7%
00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.530 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:17:08.530 issued rwts: total=364,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357936: Mon Dec
9 18:05:13 2024 00:17:08.530 read: IOPS=60, BW=61.0MiB/s (64.0MB/s)(617MiB/10116msec) 00:17:08.530 slat (usec): min=68, max=2146.1k, avg=16235.84, stdev=126633.03 00:17:08.530 clat (msec): min=95, max=4392, avg=1399.03, stdev=1213.84 00:17:08.530 lat (msec): min=132, max=4396, avg=1415.26, stdev=1223.23 00:17:08.530 clat percentiles (msec): 00:17:08.530 | 1.00th=[ 163], 5.00th=[ 384], 10.00th=[ 531], 20.00th=[ 558], 00:17:08.530 | 30.00th=[ 625], 40.00th=[ 701], 50.00th=[ 802], 60.00th=[ 852], 00:17:08.530 | 70.00th=[ 1250], 80.00th=[ 3306], 90.00th=[ 3473], 95.00th=[ 3540], 00:17:08.530 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:17:08.530 | 99.99th=[ 4396] 00:17:08.530 bw ( KiB/s): min= 2048, max=225280, per=3.32%, avg=110881.11, stdev=73610.43, samples=9 00:17:08.530 iops : min= 2, max= 220, avg=108.22, stdev=71.84, samples=9 00:17:08.530 lat (msec) : 100=0.16%, 250=2.43%, 500=6.32%, 750=39.22%, 1000=14.10% 00:17:08.530 lat (msec) : 2000=13.94%, >=2000=23.82% 00:17:08.530 cpu : usr=0.02%, sys=1.27%, ctx=1369, majf=0, minf=32769 00:17:08.530 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.530 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:08.530 issued rwts: total=617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.530 job5: (groupid=0, jobs=1): err= 0: pid=2357937: Mon Dec 9 18:05:13 2024 00:17:08.530 read: IOPS=97, BW=97.9MiB/s (103MB/s)(987MiB/10077msec) 00:17:08.530 slat (usec): min=42, max=2050.1k, avg=10142.99, stdev=74160.56 00:17:08.530 clat (msec): min=57, max=3322, avg=1147.37, stdev=821.83 00:17:08.530 lat (msec): min=135, max=3328, avg=1157.52, stdev=825.00 00:17:08.530 clat percentiles (msec): 00:17:08.530 | 1.00th=[ 194], 5.00th=[ 493], 10.00th=[ 642], 20.00th=[ 651], 00:17:08.530 | 30.00th=[ 667], 40.00th=[ 726], 50.00th=[ 844], 60.00th=[ 919], 00:17:08.530 | 70.00th=[ 1011], 80.00th=[ 1183], 90.00th=[ 2970], 95.00th=[ 3138], 00:17:08.530 | 99.00th=[ 3272], 99.50th=[ 3272], 99.90th=[ 3339], 99.95th=[ 3339], 00:17:08.530 | 99.99th=[ 3339] 00:17:08.530 bw ( KiB/s): min= 2043, max=196608, per=4.05%, avg=135311.31, stdev=53896.77, samples=13 00:17:08.530 iops : min= 1, max= 192, avg=132.00, stdev=52.79, samples=13 00:17:08.530 lat (msec) : 100=0.10%, 250=1.52%, 500=3.55%, 750=36.58%, 1000=27.76% 00:17:08.530 lat (msec) : 2000=13.27%, >=2000=17.22% 00:17:08.530 cpu : usr=0.04%, sys=2.18%, ctx=1446, majf=0, minf=32769 00:17:08.530 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6% 00:17:08.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.531 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.531 issued rwts: total=987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.531 job5: (groupid=0, jobs=1): err= 0: pid=2357938: Mon Dec 9 18:05:13 2024 00:17:08.531 read: IOPS=57, BW=57.4MiB/s (60.1MB/s)(577MiB/10059msec) 00:17:08.531 slat (usec): min=443, max=2050.1k, avg=17324.48, stdev=127644.76 00:17:08.531 clat (msec): min=57, max=5385, avg=1902.88, stdev=1813.06 00:17:08.531 lat (msec): min=58, max=5390, avg=1920.20, stdev=1817.89 00:17:08.531 clat percentiles (msec): 00:17:08.531 | 1.00th=[ 194], 5.00th=[ 422], 10.00th=[ 659], 20.00th=[ 885], 00:17:08.531 | 30.00th=[ 919], 40.00th=[ 
944], 50.00th=[ 953], 60.00th=[ 986], 00:17:08.531 | 70.00th=[ 1070], 80.00th=[ 5201], 90.00th=[ 5336], 95.00th=[ 5336], 00:17:08.531 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5403], 99.95th=[ 5403], 00:17:08.531 | 99.99th=[ 5403] 00:17:08.531 bw ( KiB/s): min= 2048, max=165888, per=2.76%, avg=92147.20, stdev=56199.08, samples=10 00:17:08.531 iops : min= 2, max= 162, avg=89.90, stdev=54.87, samples=10 00:17:08.531 lat (msec) : 100=0.35%, 250=1.91%, 500=3.99%, 750=6.59%, 1000=49.39% 00:17:08.531 lat (msec) : 2000=10.23%, >=2000=27.56% 00:17:08.531 cpu : usr=0.02%, sys=1.68%, ctx=1088, majf=0, minf=32769 00:17:08.531 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:17:08.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.531 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:08.531 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.531 job5: (groupid=0, jobs=1): err= 0: pid=2357939: Mon Dec 9 18:05:13 2024 00:17:08.531 read: IOPS=158, BW=159MiB/s (166MB/s)(1594MiB/10041msec) 00:17:08.531 slat (usec): min=44, max=2082.5k, avg=6277.40, stdev=72700.50 00:17:08.531 clat (msec): min=29, max=2764, avg=777.73, stdev=819.94 00:17:08.531 lat (msec): min=55, max=2767, avg=784.01, stdev=822.58 00:17:08.531 clat percentiles (msec): 00:17:08.531 | 1.00th=[ 184], 5.00th=[ 284], 10.00th=[ 296], 20.00th=[ 309], 00:17:08.531 | 30.00th=[ 313], 40.00th=[ 321], 50.00th=[ 514], 60.00th=[ 567], 00:17:08.531 | 70.00th=[ 609], 80.00th=[ 651], 90.00th=[ 2601], 95.00th=[ 2702], 00:17:08.531 | 99.00th=[ 2735], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769], 00:17:08.531 | 99.99th=[ 2769] 00:17:08.531 bw ( KiB/s): min=12288, max=438272, per=6.91%, avg=231076.46, stdev=135691.64, samples=13 00:17:08.531 iops : min= 12, max= 428, avg=225.62, stdev=132.52, samples=13 00:17:08.531 lat (msec) : 50=0.06%, 100=0.13%, 250=1.51%, 500=47.62%, 750=34.44% 00:17:08.531 lat (msec) : 1000=0.31%, >=2000=15.93% 00:17:08.531 cpu : usr=0.04%, sys=2.27%, ctx=2021, majf=0, minf=32770 00:17:08.531 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:17:08.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.531 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.531 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.531 job5: (groupid=0, jobs=1): err= 0: pid=2357940: Mon Dec 9 18:05:13 2024 00:17:08.531 read: IOPS=52, BW=52.0MiB/s (54.6MB/s)(521MiB/10014msec) 00:17:08.531 slat (usec): min=73, max=2113.6k, avg=19188.19, stdev=128944.07 00:17:08.531 clat (msec): min=13, max=4142, avg=2142.23, stdev=1506.23 00:17:08.531 lat (msec): min=17, max=4162, avg=2161.42, stdev=1512.06 00:17:08.531 clat percentiles (msec): 00:17:08.531 | 1.00th=[ 27], 5.00th=[ 88], 10.00th=[ 266], 20.00th=[ 634], 00:17:08.531 | 30.00th=[ 1028], 40.00th=[ 1217], 50.00th=[ 1284], 60.00th=[ 3507], 00:17:08.531 | 70.00th=[ 3608], 80.00th=[ 3876], 90.00th=[ 4010], 95.00th=[ 4044], 00:17:08.531 | 99.00th=[ 4077], 99.50th=[ 4111], 99.90th=[ 4144], 99.95th=[ 4144], 00:17:08.531 | 99.99th=[ 4144] 00:17:08.531 bw ( KiB/s): min= 2043, max=118784, per=2.10%, avg=70261.33, stdev=38345.94, samples=9 00:17:08.531 iops : min= 1, max= 116, avg=68.22, stdev=37.62, samples=9 00:17:08.531 lat (msec) : 20=0.38%, 50=2.69%, 100=2.88%, 
250=4.03%, 500=6.33% 00:17:08.531 lat (msec) : 750=5.76%, 1000=6.91%, 2000=27.45%, >=2000=43.57% 00:17:08.531 cpu : usr=0.02%, sys=1.33%, ctx=2055, majf=0, minf=32769 00:17:08.531 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:17:08.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.531 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:08.531 issued rwts: total=521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.531 job5: (groupid=0, jobs=1): err= 0: pid=2357941: Mon Dec 9 18:05:13 2024 00:17:08.531 read: IOPS=56, BW=56.8MiB/s (59.6MB/s)(574MiB/10102msec) 00:17:08.531 slat (usec): min=656, max=2158.9k, avg=17428.20, stdev=133021.99 00:17:08.531 clat (msec): min=95, max=4253, avg=1475.67, stdev=1071.79 00:17:08.531 lat (msec): min=114, max=4260, avg=1493.10, stdev=1079.27 00:17:08.531 clat percentiles (msec): 00:17:08.531 | 1.00th=[ 182], 5.00th=[ 550], 10.00th=[ 609], 20.00th=[ 667], 00:17:08.531 | 30.00th=[ 726], 40.00th=[ 835], 50.00th=[ 1003], 60.00th=[ 1234], 00:17:08.531 | 70.00th=[ 1368], 80.00th=[ 3037], 90.00th=[ 3205], 95.00th=[ 3373], 00:17:08.531 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 4245], 99.95th=[ 4245], 00:17:08.531 | 99.99th=[ 4245] 00:17:08.531 bw ( KiB/s): min=57344, max=198656, per=3.42%, avg=114402.25, stdev=54988.45, samples=8 00:17:08.531 iops : min= 56, max= 194, avg=111.62, stdev=53.69, samples=8 00:17:08.531 lat (msec) : 100=0.17%, 250=1.22%, 500=2.26%, 750=33.10%, 1000=13.07% 00:17:08.531 lat (msec) : 2000=25.09%, >=2000=25.09% 00:17:08.531 cpu : usr=0.02%, sys=1.16%, ctx=1654, majf=0, minf=32769 00:17:08.531 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:17:08.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.531 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:08.531 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.531 job5: (groupid=0, jobs=1): err= 0: pid=2357942: Mon Dec 9 18:05:13 2024 00:17:08.531 read: IOPS=48, BW=48.1MiB/s (50.4MB/s)(487MiB/10132msec) 00:17:08.531 slat (usec): min=423, max=2062.9k, avg=20572.16, stdev=140632.23 00:17:08.531 clat (msec): min=110, max=4370, avg=2043.31, stdev=1469.48 00:17:08.531 lat (msec): min=201, max=4373, avg=2063.88, stdev=1473.14 00:17:08.531 clat percentiles (msec): 00:17:08.531 | 1.00th=[ 243], 5.00th=[ 376], 10.00th=[ 617], 20.00th=[ 953], 00:17:08.531 | 30.00th=[ 995], 40.00th=[ 1028], 50.00th=[ 1133], 60.00th=[ 1334], 00:17:08.531 | 70.00th=[ 3708], 80.00th=[ 3910], 90.00th=[ 4144], 95.00th=[ 4279], 00:17:08.531 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:17:08.531 | 99.99th=[ 4396] 00:17:08.531 bw ( KiB/s): min=22483, max=137216, per=2.44%, avg=81660.56, stdev=41428.71, samples=9 00:17:08.531 iops : min= 21, max= 134, avg=79.56, stdev=40.54, samples=9 00:17:08.531 lat (msec) : 250=1.23%, 500=5.54%, 750=7.19%, 1000=17.04%, 2000=31.83% 00:17:08.531 lat (msec) : >=2000=37.17% 00:17:08.531 cpu : usr=0.05%, sys=1.38%, ctx=1479, majf=0, minf=32331 00:17:08.531 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.1% 00:17:08.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.531 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:08.531 issued rwts: 
00:17:08.531 issued rwts: total=487,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.531 job5: (groupid=0, jobs=1): err= 0: pid=2357943: Mon Dec 9 18:05:13 2024
00:17:08.531 read: IOPS=181, BW=181MiB/s (190MB/s)(1834MiB/10106msec)
00:17:08.531 slat (usec): min=42, max=2038.2k, avg=5451.01, stdev=48052.24
00:17:08.531 clat (msec): min=95, max=2805, avg=675.54, stdev=569.10
00:17:08.531 lat (msec): min=113, max=2807, avg=680.99, stdev=571.28
00:17:08.531 clat percentiles (msec):
00:17:08.531 | 1.00th=[ 326], 5.00th=[ 380], 10.00th=[ 380], 20.00th=[ 397],
00:17:08.531 | 30.00th=[ 481], 40.00th=[ 502], 50.00th=[ 527], 60.00th=[ 609],
00:17:08.531 | 70.00th=[ 634], 80.00th=[ 642], 90.00th=[ 676], 95.00th=[ 2702],
00:17:08.531 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2802], 99.95th=[ 2802],
00:17:08.531 | 99.99th=[ 2802]
00:17:08.531 bw ( KiB/s): min=14336, max=342016, per=6.53%, avg=218395.19, stdev=78536.43, samples=16
00:17:08.531 iops : min= 14, max= 334, avg=213.12, stdev=76.76, samples=16
00:17:08.531 lat (msec) : 100=0.05%, 250=0.44%, 500=34.41%, 750=58.18%, >=2000=6.92%
00:17:08.531 cpu : usr=0.16%, sys=2.84%, ctx=1631, majf=0, minf=32769
00:17:08.531 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6%
00:17:08.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:08.531 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:08.531 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:08.531 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:08.531
00:17:08.531 Run status group 0 (all jobs):
00:17:08.531 READ: bw=3264MiB/s (3422MB/s), 1800KiB/s-248MiB/s (1843kB/s-260MB/s), io=38.6GiB (41.5GB), run=10012-12119msec
00:17:08.531
00:17:08.531 Disk stats (read/write):
00:17:08.531 nvme0n1: ios=37621/0, merge=0/0, ticks=8027894/0, in_queue=8027894, util=98.18%
00:17:08.531 nvme1n1: ios=36520/0, merge=0/0, ticks=7285893/0, in_queue=7285893, util=98.59%
00:17:08.531 nvme2n1: ios=23372/0, merge=0/0, ticks=7529508/0, in_queue=7529508, util=98.60%
00:17:08.531 nvme3n1: ios=43481/0, merge=0/0, ticks=7496929/0, in_queue=7496929, util=98.73%
00:17:08.531 nvme4n1: ios=74878/0, merge=0/0, ticks=7856989/0, in_queue=7856989, util=99.11%
00:17:08.531 nvme5n1: ios=96865/0, merge=0/0, ticks=8523061/0, in_queue=8523061, util=99.11%
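The block ending above is fio's standard end-of-run report: per-job completion-latency percentiles ("clat percentiles"), bandwidth/IOPS ranges over the sampling windows, queue-depth distributions, an aggregate READ line for the whole group, and per-namespace utilization under "Disk stats". The multi-second latency tails are expected in this test, which deliberately overwhelms the target's shared receive queue at queue depth 128 (see "depth=128" in each latency line) across six namespaces. A job of roughly this shape would produce a comparable report -- an illustrative sketch only, since the actual fio job file used by the test is not part of this log, and the ~1 MiB block size is inferred from the BW/IOPS ratio:

    # hypothetical re-run against one of the namespaces listed above
    fio --name=srq_read --filename=/dev/nvme0n1 --rw=randread --bs=1M \
        --iodepth=128 --ioengine=libaio --direct=1 \
        --runtime=10 --time_based --group_reporting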
00:17:08.532 18:05:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
00:17:08.532 18:05:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
00:17:08.532 18:05:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:17:08.532 18:05:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:17:08.532 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:17:08.532 18:05:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:08.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:08.532 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:17:08.532 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:17:08.532 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:08.532 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:17:08.790 18:05:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:17:09.721 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0
00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002
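Each waitforserial_disconnect call in this teardown loop polls lsblk until the given serial number stops appearing, i.e. until the kernel has removed the block device that backed the now-disconnected controller; only then is the subsystem deleted over RPC. A simplified sketch of that helper, inferred from the common/autotest_common.sh trace around it (the retry bound shown is illustrative, not the helper's exact value):

    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            i=$((i + 1))
            [ "$i" -gt 15 ] && return 1   # give up instead of hanging the test
            sleep 1
        done
        return 0
    }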
00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:09.721 18:05:17 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:10.656 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:10.656 18:05:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:11.591 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:11.591 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:17:11.591 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:11.591 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:11.591 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDK00000000000004 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:11.849 18:05:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:12.783 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:17:12.783 18:05:20 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:12.783 rmmod nvme_rdma 00:17:12.783 rmmod nvme_fabrics 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 2356350 ']' 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 2356350 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 2356350 ']' 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 2356350 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356350 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356350' 00:17:12.783 killing process with pid 2356350 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 2356350 00:17:12.783 18:05:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 2356350 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:13.351 00:17:13.351 real 0m34.838s 00:17:13.351 user 1m58.995s 00:17:13.351 sys 0m17.407s 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:17:13.351 ************************************ 00:17:13.351 END TEST nvmf_srq_overwhelm 00:17:13.351 ************************************ 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.351 ************************************ 00:17:13.351 START TEST nvmf_shutdown 00:17:13.351 ************************************ 00:17:13.351 18:05:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:17:13.351 * Looking for test storage... 00:17:13.351 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.351 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.611 --rc genhtml_branch_coverage=1 00:17:13.611 --rc genhtml_function_coverage=1 00:17:13.611 --rc genhtml_legend=1 00:17:13.611 --rc geninfo_all_blocks=1 00:17:13.611 --rc geninfo_unexecuted_blocks=1 00:17:13.611 00:17:13.611 ' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.611 --rc genhtml_branch_coverage=1 00:17:13.611 --rc genhtml_function_coverage=1 00:17:13.611 --rc genhtml_legend=1 00:17:13.611 --rc geninfo_all_blocks=1 00:17:13.611 --rc geninfo_unexecuted_blocks=1 00:17:13.611 00:17:13.611 ' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.611 --rc genhtml_branch_coverage=1 00:17:13.611 --rc genhtml_function_coverage=1 00:17:13.611 --rc genhtml_legend=1 00:17:13.611 --rc geninfo_all_blocks=1 00:17:13.611 --rc geninfo_unexecuted_blocks=1 00:17:13.611 00:17:13.611 ' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:13.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.611 --rc genhtml_branch_coverage=1 00:17:13.611 --rc genhtml_function_coverage=1 00:17:13.611 --rc genhtml_legend=1 00:17:13.611 --rc geninfo_all_blocks=1 00:17:13.611 --rc geninfo_unexecuted_blocks=1 00:17:13.611 00:17:13.611 ' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.611 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.611 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:13.612 18:05:21 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:13.612 ************************************ 00:17:13.612 START TEST nvmf_shutdown_tc1 00:17:13.612 ************************************ 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:13.612 18:05:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.830 18:05:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:21.830 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.830 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:21.830 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:17:21.831 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:21.831 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
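allocate_nic_ips walks the RDMA-capable interfaces returned by get_rdma_if_list and makes sure each carries an address in the 192.168.100.0/24 test subnet, counting up from NVMF_IP_LEAST_ADDR (8 on this run, which yields 192.168.100.8 and 192.168.100.9). Condensed to its effect -- a sketch, not the helper's literal body; on this host the addresses already exist, so the trace below only reads them back with ip/awk/cut:

    count=8                                 # NVMF_IP_LEAST_ADDR
    for nic in $(get_rdma_if_list); do      # mlx_0_0 and mlx_0_1 on this rig
        ip addr add "192.168.100.${count}/24" dev "$nic" 2>/dev/null || true
        count=$((count + 1))
    done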
00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:21.831 18:05:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:21.831 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:21.831 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:21.831 altname enp217s0f0np0 00:17:21.831 altname ens818f0np0 00:17:21.831 inet 192.168.100.8/24 scope global mlx_0_0 00:17:21.831 valid_lft forever preferred_lft forever 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:21.831 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:21.831 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:21.831 altname enp217s0f1np1 00:17:21.831 altname ens818f1np1 00:17:21.831 inet 192.168.100.9/24 scope global mlx_0_1 00:17:21.831 valid_lft forever preferred_lft forever 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:21.831 
18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:21.831 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:21.832 192.168.100.9' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:21.832 192.168.100.9' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:17:21.832 18:05:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:21.832 192.168.100.9' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2364370 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2364370 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2364370 ']' 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.832 18:05:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:21.832 [2024-12-09 18:05:28.800921] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
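The address bookkeeping that concluded above reduces to one small pipeline per RDMA netdev plus a head/tail split over the collected list. A sketch of the equivalent shell, with the values observed in this run noted in comments:

  # $4 of `ip -o -4 addr show DEV` is the CIDR address, e.g. 192.168.100.8/24.
  get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9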
00:17:21.832 [2024-12-09 18:05:28.800991] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.832 [2024-12-09 18:05:28.892669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.832 [2024-12-09 18:05:28.933067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.832 [2024-12-09 18:05:28.933103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.832 [2024-12-09 18:05:28.933113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.832 [2024-12-09 18:05:28.933122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.832 [2024-12-09 18:05:28.933128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.832 [2024-12-09 18:05:28.934775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.832 [2024-12-09 18:05:28.934886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.832 [2024-12-09 18:05:28.934902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:21.832 [2024-12-09 18:05:28.934910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.832 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:21.832 [2024-12-09 18:05:29.716073] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1523c80/0x1528170) succeed. 00:17:21.832 [2024-12-09 18:05:29.725506] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1525310/0x1569810) succeed. 
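The rpc_cmd wrapper above drives SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket the target was started on. Outside the harness, a standalone equivalent of the transport-creation call would look roughly like this (a sketch; repository path as used in this workspace):

  # Create the RDMA transport on the running nvmf_tgt: -u 8192 sets the
  # I/O unit size, --num-shared-buffers caps the shared receive buffer
  # pool at 1024, matching the trace above.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192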
00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.090 18:05:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:22.090 Malloc1 00:17:22.090 [2024-12-09 18:05:29.964814] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:22.090 Malloc2 00:17:22.090 Malloc3 00:17:22.347 Malloc4 00:17:22.347 Malloc5 00:17:22.347 Malloc6 00:17:22.347 Malloc7 00:17:22.347 Malloc8 00:17:22.347 Malloc9 00:17:22.604 Malloc10 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2364692 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2364692 /var/tmp/bdevperf.sock 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2364692 ']' 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
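The gen_nvmf_target_json trace that follows builds one heredoc fragment per subsystem and splices them into a single JSON config. Expanded for subsystem 1 with this run's values (the fully expanded form is printed by the harness further down), a single fragment amounts to:

  config+=("$(cat <<-EOF
  {
    "params": {
      "name": "Nvme1",
      "trtype": "rdma",
      "traddr": "192.168.100.8",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
  )")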
00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.604 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.604 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 [2024-12-09 18:05:30.458549] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:17:22.605 [2024-12-09 18:05:30.458602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:22.605 { 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme$subsystem", 00:17:22.605 "trtype": "$TEST_TRANSPORT", 00:17:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.605 "adrfam": 
"ipv4", 00:17:22.605 "trsvcid": "$NVMF_PORT", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.605 "hdgst": ${hdgst:-false}, 00:17:22.605 "ddgst": ${ddgst:-false} 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 } 00:17:22.605 EOF 00:17:22.605 )") 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:22.605 18:05:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme1", 00:17:22.605 "trtype": "rdma", 00:17:22.605 "traddr": "192.168.100.8", 00:17:22.605 "adrfam": "ipv4", 00:17:22.605 "trsvcid": "4420", 00:17:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.605 "hdgst": false, 00:17:22.605 "ddgst": false 00:17:22.605 }, 00:17:22.605 "method": "bdev_nvme_attach_controller" 00:17:22.605 },{ 00:17:22.605 "params": { 00:17:22.605 "name": "Nvme2", 00:17:22.605 "trtype": "rdma", 00:17:22.605 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme3", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme4", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme5", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme6", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme7", 00:17:22.606 "trtype": "rdma", 
00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme8", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme9", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 },{ 00:17:22.606 "params": { 00:17:22.606 "name": "Nvme10", 00:17:22.606 "trtype": "rdma", 00:17:22.606 "traddr": "192.168.100.8", 00:17:22.606 "adrfam": "ipv4", 00:17:22.606 "trsvcid": "4420", 00:17:22.606 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:22.606 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:22.606 "hdgst": false, 00:17:22.606 "ddgst": false 00:17:22.606 }, 00:17:22.606 "method": "bdev_nvme_attach_controller" 00:17:22.606 }' 00:17:22.606 [2024-12-09 18:05:30.551928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.863 [2024-12-09 18:05:30.591297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2364692 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:17:23.795 18:05:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:17:24.726 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2364692 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2364370 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.726 "hdgst": ${hdgst:-false}, 00:17:24.726 "ddgst": ${ddgst:-false} 00:17:24.726 }, 00:17:24.726 "method": "bdev_nvme_attach_controller" 00:17:24.726 } 00:17:24.726 EOF 00:17:24.726 )") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.726 "hdgst": ${hdgst:-false}, 00:17:24.726 "ddgst": ${ddgst:-false} 00:17:24.726 }, 00:17:24.726 "method": "bdev_nvme_attach_controller" 00:17:24.726 } 00:17:24.726 EOF 00:17:24.726 )") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.726 "hdgst": ${hdgst:-false}, 00:17:24.726 "ddgst": ${ddgst:-false} 00:17:24.726 }, 00:17:24.726 "method": "bdev_nvme_attach_controller" 00:17:24.726 } 00:17:24.726 EOF 00:17:24.726 )") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.726 "hdgst": ${hdgst:-false}, 00:17:24.726 "ddgst": ${ddgst:-false} 00:17:24.726 }, 00:17:24.726 "method": "bdev_nvme_attach_controller" 00:17:24.726 } 00:17:24.726 EOF 00:17:24.726 )") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.726 "hdgst": ${hdgst:-false}, 00:17:24.726 "ddgst": ${ddgst:-false} 00:17:24.726 }, 00:17:24.726 "method": "bdev_nvme_attach_controller" 00:17:24.726 } 00:17:24.726 EOF 00:17:24.726 )") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.726 "hdgst": ${hdgst:-false}, 00:17:24.726 "ddgst": ${ddgst:-false} 00:17:24.726 }, 00:17:24.726 "method": "bdev_nvme_attach_controller" 00:17:24.726 } 00:17:24.726 EOF 00:17:24.726 )") 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.726 [2024-12-09 18:05:32.507461] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
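The --json /dev/fd/62 seen on the bdevperf command line above is bash process substitution at work: the generated target config never touches disk, it is exposed as a file descriptor for the duration of the call. Reduced to its essentials (bdevperf here stands in for the full build/examples path logged above):

  # Expose gen_nvmf_target_json's stdout as /dev/fd/NN and hand it to bdevperf:
  # queue depth 64, 64 KiB I/Os, verify workload, 1 second runtime.
  bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
      -q 64 -o 65536 -w verify -t 1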
00:17:24.726 [2024-12-09 18:05:32.507512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364995 ] 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.726 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.726 { 00:17:24.726 "params": { 00:17:24.726 "name": "Nvme$subsystem", 00:17:24.726 "trtype": "$TEST_TRANSPORT", 00:17:24.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.726 "adrfam": "ipv4", 00:17:24.726 "trsvcid": "$NVMF_PORT", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.727 "hdgst": ${hdgst:-false}, 00:17:24.727 "ddgst": ${ddgst:-false} 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 } 00:17:24.727 EOF 00:17:24.727 )") 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.727 { 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme$subsystem", 00:17:24.727 "trtype": "$TEST_TRANSPORT", 00:17:24.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "$NVMF_PORT", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.727 "hdgst": ${hdgst:-false}, 00:17:24.727 "ddgst": ${ddgst:-false} 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 } 00:17:24.727 EOF 00:17:24.727 )") 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.727 { 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme$subsystem", 00:17:24.727 "trtype": "$TEST_TRANSPORT", 00:17:24.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "$NVMF_PORT", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.727 "hdgst": ${hdgst:-false}, 00:17:24.727 "ddgst": ${ddgst:-false} 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 } 00:17:24.727 EOF 00:17:24.727 )") 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:24.727 { 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme$subsystem", 00:17:24.727 "trtype": "$TEST_TRANSPORT", 00:17:24.727 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "$NVMF_PORT", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:24.727 "hdgst": ${hdgst:-false}, 00:17:24.727 "ddgst": ${ddgst:-false} 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 } 00:17:24.727 EOF 00:17:24.727 )") 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:17:24.727 18:05:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme1", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme2", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme3", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme4", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme5", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme6", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme7", 00:17:24.727 
"trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme8", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme9", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 },{ 00:17:24.727 "params": { 00:17:24.727 "name": "Nvme10", 00:17:24.727 "trtype": "rdma", 00:17:24.727 "traddr": "192.168.100.8", 00:17:24.727 "adrfam": "ipv4", 00:17:24.727 "trsvcid": "4420", 00:17:24.727 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:24.727 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:24.727 "hdgst": false, 00:17:24.727 "ddgst": false 00:17:24.727 }, 00:17:24.727 "method": "bdev_nvme_attach_controller" 00:17:24.727 }' 00:17:24.727 [2024-12-09 18:05:32.602253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.727 [2024-12-09 18:05:32.641559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.659 Running I/O for 1 seconds... 
00:17:27.030 3557.00 IOPS, 222.31 MiB/s
00:17:27.030 Latency(us)
00:17:27.030 [2024-12-09T17:05:35.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:27.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme1n1 : 1.18 379.48 23.72 0.00 0.00 165593.44 9594.47 210554.06
00:17:27.030 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme2n1 : 1.18 379.11 23.69 0.00 0.00 162998.33 9909.04 194615.71
00:17:27.030 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme3n1 : 1.18 387.95 24.25 0.00 0.00 157183.68 5583.67 183710.52
00:17:27.030 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme4n1 : 1.18 402.79 25.17 0.00 0.00 149306.67 7916.75 130023.42
00:17:27.030 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme5n1 : 1.18 383.13 23.95 0.00 0.00 154388.03 10433.33 117440.51
00:17:27.030 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme6n1 : 1.19 404.67 25.29 0.00 0.00 144730.36 10538.19 110729.63
00:17:27.030 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme7n1 : 1.19 395.07 24.69 0.00 0.00 145933.44 10590.62 100663.30
00:17:27.030 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme8n1 : 1.19 385.54 24.10 0.00 0.00 147202.21 10433.33 98146.71
00:17:27.030 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme9n1 : 1.18 380.15 23.76 0.00 0.00 148247.11 25270.68 110729.63
00:17:27.030 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:27.030 Verification LBA range: start 0x0 length 0x400
00:17:27.030 Nvme10n1 : 1.19 268.98 16.81 0.00 0.00 206579.55 9437.18 449629.39
[2024-12-09T17:05:35.009Z] ===================================================================================================================
[2024-12-09T17:05:35.009Z] Total : 3766.87 235.43 0.00 0.00 156583.73 5583.67 449629.39
00:17:27.030 18:05:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:17:27.030 18:05:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:27.030 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:27.030 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:17:27.290 18:05:35
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:27.290 rmmod nvme_rdma 00:17:27.290 rmmod nvme_fabrics 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2364370 ']' 00:17:27.290 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2364370 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2364370 ']' 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2364370 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364370 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364370' 00:17:27.291 killing process with pid 2364370 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2364370 00:17:27.291 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2364370 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:27.857 00:17:27.857 real 0m14.155s 00:17:27.857 user 0m31.482s 00:17:27.857 sys 0m6.731s 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.857 18:05:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:27.857 ************************************ 00:17:27.857 END TEST nvmf_shutdown_tc1 00:17:27.857 ************************************ 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:27.857 ************************************ 00:17:27.857 START TEST nvmf_shutdown_tc2 00:17:27.857 ************************************ 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.857 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.858 18:05:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # 
pci_devs+=("${mlx[@]}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:27.858 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:27.858 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.858 18:05:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:27.858 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:27.858 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:27.858 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:27.859 18:05:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:17:27.859 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:27.859 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:17:27.859 altname enp217s0f0np0
00:17:27.859 altname ens818f0np0
00:17:27.859 inet 192.168.100.8/24 scope global mlx_0_0
00:17:27.859 valid_lft forever preferred_lft forever
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}'
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:17:27.859 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:17:27.859 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:27.859 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:17:27.859 altname enp217s0f1np1
00:17:27.859 altname ens818f1np1
00:17:27.859 inet 192.168.100.9/24 scope global mlx_0_1
00:17:27.859 valid_lft forever preferred_lft forever
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:17:28.117
18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:28.117 192.168.100.9' 00:17:28.117 18:05:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:28.117 192.168.100.9' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:28.117 192.168.100.9' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2365743 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2365743 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2365743 ']' 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.117 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:28.118 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.118 18:05:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 [2024-12-09 18:05:36.007092] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:28.118 [2024-12-09 18:05:36.007148] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.375 [2024-12-09 18:05:36.100935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.375 [2024-12-09 18:05:36.141306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.375 [2024-12-09 18:05:36.141345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.375 [2024-12-09 18:05:36.141354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.375 [2024-12-09 18:05:36.141362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.375 [2024-12-09 18:05:36.141369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.375 [2024-12-09 18:05:36.143124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.375 [2024-12-09 18:05:36.143237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.375 [2024-12-09 18:05:36.143347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:28.375 [2024-12-09 18:05:36.143346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.939 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.940 18:05:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:29.197 [2024-12-09 18:05:36.924026] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xca4c80/0xca9170) succeed. 00:17:29.197 [2024-12-09 18:05:36.933345] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xca6310/0xcea810) succeed. 
00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.197 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:29.197 Malloc1 00:17:29.197 [2024-12-09 18:05:37.172019] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:29.455 Malloc2 00:17:29.455 Malloc3 00:17:29.455 Malloc4 00:17:29.455 Malloc5 00:17:29.455 Malloc6 00:17:29.455 Malloc7 00:17:29.713 Malloc8 00:17:29.713 Malloc9 00:17:29.713 Malloc10 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2366084 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2366084 /var/tmp/bdevperf.sock 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2366084 ']' 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.713 "trtype": "$TEST_TRANSPORT", 00:17:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.713 "adrfam": "ipv4", 00:17:29.713 "trsvcid": "$NVMF_PORT", 00:17:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.713 "hdgst": ${hdgst:-false}, 00:17:29.713 "ddgst": ${ddgst:-false} 00:17:29.713 }, 00:17:29.713 "method": "bdev_nvme_attach_controller" 00:17:29.713 } 00:17:29.713 EOF 00:17:29.713 )") 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.713 "trtype": "$TEST_TRANSPORT", 00:17:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.713 "adrfam": "ipv4", 00:17:29.713 "trsvcid": "$NVMF_PORT", 00:17:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.713 "hdgst": ${hdgst:-false}, 00:17:29.713 "ddgst": ${ddgst:-false} 00:17:29.713 }, 00:17:29.713 "method": "bdev_nvme_attach_controller" 00:17:29.713 } 00:17:29.713 EOF 00:17:29.713 )") 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.713 "trtype": "$TEST_TRANSPORT", 00:17:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.713 "adrfam": "ipv4", 00:17:29.713 "trsvcid": "$NVMF_PORT", 00:17:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.713 "hdgst": ${hdgst:-false}, 00:17:29.713 "ddgst": ${ddgst:-false} 00:17:29.713 }, 00:17:29.713 "method": "bdev_nvme_attach_controller" 00:17:29.713 } 00:17:29.713 EOF 00:17:29.713 )") 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.713 18:05:37 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.713 "trtype": "$TEST_TRANSPORT", 00:17:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.713 "adrfam": "ipv4", 00:17:29.713 "trsvcid": "$NVMF_PORT", 00:17:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.713 "hdgst": ${hdgst:-false}, 00:17:29.713 "ddgst": ${ddgst:-false} 00:17:29.713 }, 00:17:29.713 "method": "bdev_nvme_attach_controller" 00:17:29.713 } 00:17:29.713 EOF 00:17:29.713 )") 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.713 "trtype": "$TEST_TRANSPORT", 00:17:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.713 "adrfam": "ipv4", 00:17:29.713 "trsvcid": "$NVMF_PORT", 00:17:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.713 "hdgst": ${hdgst:-false}, 00:17:29.713 "ddgst": ${ddgst:-false} 00:17:29.713 }, 00:17:29.713 "method": "bdev_nvme_attach_controller" 00:17:29.713 } 00:17:29.713 EOF 00:17:29.713 )") 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.713 "trtype": "$TEST_TRANSPORT", 00:17:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.713 "adrfam": "ipv4", 00:17:29.713 "trsvcid": "$NVMF_PORT", 00:17:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.713 "hdgst": ${hdgst:-false}, 00:17:29.713 "ddgst": ${ddgst:-false} 00:17:29.713 }, 00:17:29.713 "method": "bdev_nvme_attach_controller" 00:17:29.713 } 00:17:29.713 EOF 00:17:29.713 )") 00:17:29.713 [2024-12-09 18:05:37.662217] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:17:29.713 [2024-12-09 18:05:37.662271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366084 ] 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.713 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.713 { 00:17:29.713 "params": { 00:17:29.713 "name": "Nvme$subsystem", 00:17:29.714 "trtype": "$TEST_TRANSPORT", 00:17:29.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.714 "adrfam": "ipv4", 00:17:29.714 "trsvcid": "$NVMF_PORT", 00:17:29.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.714 "hdgst": ${hdgst:-false}, 00:17:29.714 "ddgst": ${ddgst:-false} 00:17:29.714 }, 00:17:29.714 "method": "bdev_nvme_attach_controller" 00:17:29.714 } 00:17:29.714 EOF 00:17:29.714 )") 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.714 { 00:17:29.714 "params": { 00:17:29.714 "name": "Nvme$subsystem", 00:17:29.714 "trtype": "$TEST_TRANSPORT", 00:17:29.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.714 "adrfam": "ipv4", 00:17:29.714 "trsvcid": "$NVMF_PORT", 00:17:29.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.714 "hdgst": ${hdgst:-false}, 00:17:29.714 "ddgst": ${ddgst:-false} 00:17:29.714 }, 00:17:29.714 "method": "bdev_nvme_attach_controller" 00:17:29.714 } 00:17:29.714 EOF 00:17:29.714 )") 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.714 { 00:17:29.714 "params": { 00:17:29.714 "name": "Nvme$subsystem", 00:17:29.714 "trtype": "$TEST_TRANSPORT", 00:17:29.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.714 "adrfam": "ipv4", 00:17:29.714 "trsvcid": "$NVMF_PORT", 00:17:29.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.714 "hdgst": ${hdgst:-false}, 00:17:29.714 "ddgst": ${ddgst:-false} 00:17:29.714 }, 00:17:29.714 "method": "bdev_nvme_attach_controller" 00:17:29.714 } 00:17:29.714 EOF 00:17:29.714 )") 00:17:29.714 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.971 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:29.971 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:29.971 { 00:17:29.971 "params": { 00:17:29.971 "name": 
"Nvme$subsystem", 00:17:29.971 "trtype": "$TEST_TRANSPORT", 00:17:29.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.971 "adrfam": "ipv4", 00:17:29.971 "trsvcid": "$NVMF_PORT", 00:17:29.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.971 "hdgst": ${hdgst:-false}, 00:17:29.971 "ddgst": ${ddgst:-false} 00:17:29.971 }, 00:17:29.971 "method": "bdev_nvme_attach_controller" 00:17:29.971 } 00:17:29.971 EOF 00:17:29.971 )") 00:17:29.971 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:17:29.971 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:17:29.971 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:17:29.971 18:05:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:29.971 "params": { 00:17:29.971 "name": "Nvme1", 00:17:29.971 "trtype": "rdma", 00:17:29.971 "traddr": "192.168.100.8", 00:17:29.971 "adrfam": "ipv4", 00:17:29.971 "trsvcid": "4420", 00:17:29.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.971 "hdgst": false, 00:17:29.971 "ddgst": false 00:17:29.971 }, 00:17:29.971 "method": "bdev_nvme_attach_controller" 00:17:29.971 },{ 00:17:29.971 "params": { 00:17:29.971 "name": "Nvme2", 00:17:29.971 "trtype": "rdma", 00:17:29.971 "traddr": "192.168.100.8", 00:17:29.971 "adrfam": "ipv4", 00:17:29.971 "trsvcid": "4420", 00:17:29.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:29.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:29.971 "hdgst": false, 00:17:29.971 "ddgst": false 00:17:29.971 }, 00:17:29.971 "method": "bdev_nvme_attach_controller" 00:17:29.971 },{ 00:17:29.971 "params": { 00:17:29.971 "name": "Nvme3", 00:17:29.971 "trtype": "rdma", 00:17:29.971 "traddr": "192.168.100.8", 00:17:29.971 "adrfam": "ipv4", 00:17:29.971 "trsvcid": "4420", 00:17:29.971 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:29.971 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:29.971 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme4", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme5", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme6", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": 
"bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme7", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme8", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme9", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 },{ 00:17:29.972 "params": { 00:17:29.972 "name": "Nvme10", 00:17:29.972 "trtype": "rdma", 00:17:29.972 "traddr": "192.168.100.8", 00:17:29.972 "adrfam": "ipv4", 00:17:29.972 "trsvcid": "4420", 00:17:29.972 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:29.972 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:29.972 "hdgst": false, 00:17:29.972 "ddgst": false 00:17:29.972 }, 00:17:29.972 "method": "bdev_nvme_attach_controller" 00:17:29.972 }' 00:17:29.972 [2024-12-09 18:05:37.754296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.972 [2024-12-09 18:05:37.793331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.903 Running I/O for 10 seconds... 
00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:30.903 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:31.161 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.161 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=35 00:17:31.161 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 35 -ge 100 ']' 00:17:31.161 18:05:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.418 
18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=187 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 187 -ge 100 ']' 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2366084 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2366084 ']' 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2366084 00:17:31.418 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366084 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366084' killing process with pid 2366084 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2366084 00:17:31.676 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2366084
00:17:31.676 Received shutdown signal, test time was about 0.836524 seconds
00:17:31.676
00:17:31.676 Latency(us)
00:17:31.676 [2024-12-09T17:05:39.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:31.676 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme1n1 : 0.82 379.65 23.73 0.00 0.00 164575.68 7392.46 234881.02
00:17:31.676 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme2n1 : 0.82 391.23 24.45 0.00 0.00 156686.97 4718.59 166933.30
00:17:31.676 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme3n1 : 0.82 388.24 24.26 0.00 0.00 154753.76 8493.47 160222.41
00:17:31.676 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme4n1 : 0.83 387.68 24.23 0.00 0.00 151968.32 8755.61 153511.53
00:17:31.676 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme5n1 : 0.83 386.99 24.19 0.00 0.00 149701.92 9332.33 142606.34
00:17:31.676 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme6n1 : 0.83 386.42 24.15 0.00 0.00 146486.64 9751.76 135056.59
00:17:31.676 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme7n1 : 0.83 385.87 24.12 0.00 0.00 143558.25 10013.90 128345.70
00:17:31.676 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme8n1 : 0.83 385.29 24.08 0.00 0.00 140927.30 10328.47 120795.96
00:17:31.676 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme9n1 : 0.83 384.63 24.04 0.00 0.00 138530.32 10905.19 109890.76
00:17:31.676 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:31.676 Verification LBA range: start 0x0 length 0x400
00:17:31.676 Nvme10n1 : 0.84 306.26 19.14 0.00 0.00 170415.41 3001.55 243269.63
00:17:31.676 [2024-12-09T17:05:39.655Z] ===================================================================================================================
00:17:31.676 [2024-12-09T17:05:39.655Z] Total : 3782.25 236.39 0.00 0.00 151349.41 3001.55 243269.63
00:17:31.933 18:05:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2365743 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.865 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
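The waitforio loop traced above polls bdev_get_iostat on Nvme1n1 until it has completed at least 100 reads (35 on the first pass, 187 on the second, hence ret=0 and break). A stripped-down sketch of that loop, assuming the same socket, bdev name, and jq filter:

    # Minimal sketch of target/shutdown.sh waitforio (simplified from the trace above):
    i=10; ret=1
    while [ "$i" -ne 0 ]; do
        ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
              | jq -r '.bdevs[0].num_read_ops')
        if [ "$ops" -ge 100 ]; then ret=0; break; fi
        sleep 0.25
        i=$((i - 1))
    done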
00:17:32.865 rmmod nvme_rdma 00:17:32.865 rmmod nvme_fabrics 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2365743 ']' 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2365743 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2365743 ']' 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2365743 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2365743 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2365743' 00:17:33.123 killing process with pid 2365743 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2365743 00:17:33.123 18:05:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2365743 00:17:33.382 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.382 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:33.382 00:17:33.382 real 0m5.689s 00:17:33.382 user 0m22.817s 00:17:33.382 sys 0m1.297s 00:17:33.382 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.382 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:33.382 ************************************ 00:17:33.382 END TEST nvmf_shutdown_tc2 00:17:33.382 ************************************ 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:33.642 ************************************ 00:17:33.642 START TEST nvmf_shutdown_tc3 00:17:33.642 ************************************ 00:17:33.642 18:05:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:17:33.642 18:05:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.642 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:33.642 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.643 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.643 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # 
[[ rdma == tcp ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.643 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:33.643 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.643 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.643 altname enp217s0f0np0 00:17:33.643 altname ens818f0np0 00:17:33.643 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.643 valid_lft forever preferred_lft forever 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:33.643 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.643 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.643 altname enp217s0f1np1 00:17:33.643 altname ens818f1np1 00:17:33.643 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.643 valid_lft forever preferred_lft forever 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.643 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.903 192.168.100.9' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:33.903 192.168.100.9' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:33.903 192.168.100.9' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.903 
18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:33.903 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2366863 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2366863 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2366863 ']' 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.904 18:05:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:33.904 [2024-12-09 18:05:41.783188] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:33.904 [2024-12-09 18:05:41.783236] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.904 [2024-12-09 18:05:41.871192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.162 [2024-12-09 18:05:41.911615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.162 [2024-12-09 18:05:41.911650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
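nvmfappstart above amounts to launching nvmf_tgt with the given core mask and blocking until its RPC socket answers; a rough sketch (binary path from this workspace; the real waitforlisten helper retries more carefully than shown):

    # Simplified sketch of nvmfappstart (assumed; not the verbatim helper):
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    ./scripts/rpc.py framework_wait_init    # returns once app init completes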
00:17:34.162 [2024-12-09 18:05:41.911659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.162 [2024-12-09 18:05:41.911668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.162 [2024-12-09 18:05:41.911675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.162 [2024-12-09 18:05:41.913515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.162 [2024-12-09 18:05:41.913630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.162 [2024-12-09 18:05:41.913760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.162 [2024-12-09 18:05:41.913761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.726 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:34.983 [2024-12-09 18:05:42.707658] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1438c80/0x143d170) succeed. 00:17:34.983 [2024-12-09 18:05:42.718109] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x143a310/0x147e810) succeed. 
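Target-side setup continues above: nvmf_create_transport registers the RDMA transport, after which both mlx5 ports come up as IB devices. The traced RPC, issued directly against the default target socket:

    # Transport creation as traced above; -u sets the IO unit size in bytes:
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192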
00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.983 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.984 18:05:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:34.984 Malloc1 00:17:34.984 [2024-12-09 18:05:42.955025] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:35.241 Malloc2 00:17:35.241 Malloc3 00:17:35.241 Malloc4 00:17:35.241 Malloc5 00:17:35.241 Malloc6 00:17:35.241 Malloc7 00:17:35.499 Malloc8 00:17:35.499 Malloc9 00:17:35.499 Malloc10 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2367179 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2367179 /var/tmp/bdevperf.sock 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2367179 ']' 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
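Each iteration of the create_subsystems loop above appends one block to rpcs.txt; the Malloc1–Malloc10 bdevs and the listener notice on 192.168.100.8:4420 are its visible result. One iteration expands to roughly the following (malloc size/block size and the serial number are illustrative assumptions, not values from this log):

    # Hypothetical expansion of one loop iteration (i=1); sizes assumed:
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420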
00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.499 { 00:17:35.499 "params": { 00:17:35.499 "name": "Nvme$subsystem", 00:17:35.499 "trtype": "$TEST_TRANSPORT", 00:17:35.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.499 "adrfam": "ipv4", 00:17:35.499 "trsvcid": "$NVMF_PORT", 00:17:35.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.499 "hdgst": ${hdgst:-false}, 00:17:35.499 "ddgst": ${ddgst:-false} 00:17:35.499 }, 00:17:35.499 "method": "bdev_nvme_attach_controller" 00:17:35.499 } 00:17:35.499 EOF 00:17:35.499 )") 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.499 { 00:17:35.499 "params": { 00:17:35.499 "name": "Nvme$subsystem", 00:17:35.499 "trtype": "$TEST_TRANSPORT", 00:17:35.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.499 "adrfam": "ipv4", 00:17:35.499 "trsvcid": "$NVMF_PORT", 00:17:35.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.499 "hdgst": ${hdgst:-false}, 00:17:35.499 "ddgst": ${ddgst:-false} 00:17:35.499 }, 00:17:35.499 "method": "bdev_nvme_attach_controller" 00:17:35.499 } 00:17:35.499 EOF 00:17:35.499 )") 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.499 { 00:17:35.499 "params": { 00:17:35.499 "name": "Nvme$subsystem", 00:17:35.499 "trtype": "$TEST_TRANSPORT", 00:17:35.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.499 "adrfam": "ipv4", 00:17:35.499 "trsvcid": "$NVMF_PORT", 00:17:35.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.499 "hdgst": ${hdgst:-false}, 00:17:35.499 "ddgst": ${ddgst:-false} 00:17:35.499 }, 00:17:35.499 "method": "bdev_nvme_attach_controller" 00:17:35.499 } 00:17:35.499 EOF 00:17:35.499 )") 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.499 { 00:17:35.499 "params": { 00:17:35.499 "name": "Nvme$subsystem", 00:17:35.499 "trtype": "$TEST_TRANSPORT", 00:17:35.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.499 "adrfam": "ipv4", 00:17:35.499 "trsvcid": "$NVMF_PORT", 00:17:35.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.499 "hdgst": ${hdgst:-false}, 00:17:35.499 "ddgst": ${ddgst:-false} 00:17:35.499 }, 00:17:35.499 "method": "bdev_nvme_attach_controller" 00:17:35.499 } 00:17:35.499 EOF 00:17:35.499 )") 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.499 { 00:17:35.499 "params": { 00:17:35.499 "name": "Nvme$subsystem", 00:17:35.499 "trtype": "$TEST_TRANSPORT", 00:17:35.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.499 "adrfam": "ipv4", 00:17:35.499 "trsvcid": "$NVMF_PORT", 00:17:35.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.499 "hdgst": ${hdgst:-false}, 00:17:35.499 "ddgst": ${ddgst:-false} 00:17:35.499 }, 00:17:35.499 "method": "bdev_nvme_attach_controller" 00:17:35.499 } 00:17:35.499 EOF 00:17:35.499 )") 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.499 [2024-12-09 18:05:43.442852] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:35.499 [2024-12-09 18:05:43.442903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367179 ] 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.499 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.499 { 00:17:35.499 "params": { 00:17:35.499 "name": "Nvme$subsystem", 00:17:35.499 "trtype": "$TEST_TRANSPORT", 00:17:35.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.499 "adrfam": "ipv4", 00:17:35.499 "trsvcid": "$NVMF_PORT", 00:17:35.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.500 "hdgst": ${hdgst:-false}, 00:17:35.500 "ddgst": ${ddgst:-false} 00:17:35.500 }, 00:17:35.500 "method": "bdev_nvme_attach_controller" 00:17:35.500 } 00:17:35.500 EOF 00:17:35.500 )") 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.500 { 00:17:35.500 "params": { 00:17:35.500 "name": "Nvme$subsystem", 00:17:35.500 "trtype": "$TEST_TRANSPORT", 00:17:35.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.500 "adrfam": "ipv4", 00:17:35.500 "trsvcid": "$NVMF_PORT", 00:17:35.500 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.500 "hdgst": ${hdgst:-false}, 00:17:35.500 "ddgst": ${ddgst:-false} 00:17:35.500 }, 00:17:35.500 "method": "bdev_nvme_attach_controller" 00:17:35.500 } 00:17:35.500 EOF 00:17:35.500 )") 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.500 { 00:17:35.500 "params": { 00:17:35.500 "name": "Nvme$subsystem", 00:17:35.500 "trtype": "$TEST_TRANSPORT", 00:17:35.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.500 "adrfam": "ipv4", 00:17:35.500 "trsvcid": "$NVMF_PORT", 00:17:35.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.500 "hdgst": ${hdgst:-false}, 00:17:35.500 "ddgst": ${ddgst:-false} 00:17:35.500 }, 00:17:35.500 "method": "bdev_nvme_attach_controller" 00:17:35.500 } 00:17:35.500 EOF 00:17:35.500 )") 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.500 { 00:17:35.500 "params": { 00:17:35.500 "name": "Nvme$subsystem", 00:17:35.500 "trtype": "$TEST_TRANSPORT", 00:17:35.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.500 "adrfam": "ipv4", 00:17:35.500 "trsvcid": "$NVMF_PORT", 00:17:35.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.500 "hdgst": ${hdgst:-false}, 00:17:35.500 "ddgst": ${ddgst:-false} 00:17:35.500 }, 00:17:35.500 "method": "bdev_nvme_attach_controller" 00:17:35.500 } 00:17:35.500 EOF 00:17:35.500 )") 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.500 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.500 { 00:17:35.500 "params": { 00:17:35.500 "name": "Nvme$subsystem", 00:17:35.500 "trtype": "$TEST_TRANSPORT", 00:17:35.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.500 "adrfam": "ipv4", 00:17:35.500 "trsvcid": "$NVMF_PORT", 00:17:35.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.500 "hdgst": ${hdgst:-false}, 00:17:35.500 "ddgst": ${ddgst:-false} 00:17:35.500 }, 00:17:35.500 "method": "bdev_nvme_attach_controller" 00:17:35.500 } 00:17:35.500 EOF 00:17:35.500 )") 00:17:35.758 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:17:35.758 18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
18:05:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
  "params": {
    "name": "Nvme1",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
},{
(analogous params blocks follow for Nvme2 through Nvme10, identical except for the name and the cnode/host numbers in the NQNs)
}'
[2024-12-09 18:05:43.534130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 18:05:43.573083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 10 seconds...
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
18:05:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=155
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']'
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2366863
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2366863 ']'
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2366863
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366863
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366863'
killing process with pid 2366863
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2366863
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2366863
2582.00 IOPS, 161.38 MiB/s [2024-12-09T17:05:45.700Z]
18:05:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
[2024-12-09 18:05:46.260888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-09 18:05:46.260934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0
(the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:2, cid:3 and cid:4)
[2024-12-09 18:05:46.263554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-12-09 18:05:46.263611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
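Everything that follows is the intended effect of the killprocess call above: the plain kill (SIGTERM) to pid 2366863 takes down the nvmf target, so every RDMA qpair to 192.168.100.8 drops at once and the host side starts reporting transport errors. The helper's shape, reconstructed from the traced commands (the sudo special-case behavior is an assumption, this run does not exercise it):

    # Reconstructed sketch of common/autotest_common.sh's killprocess, based on
    # the traced guards: empty-arg check, liveness probe, comm-name inspection.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1            # '[' -z ... ']' guard from the trace
        kill -0 "$pid" || return 1           # process must still be alive
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                         # assumed: refuse to kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                  # reap; only works for children of this shell
    }

Here ps reports comm=reactor_1, i.e. the target's SPDK reactor thread, so the kill proceeds and wait reaps the target's exit status.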
(a burst of four ASYNC EVENT REQUEST / ABORTED - SQ DELETION admin-completion pairs, identical to the one above, separates each of the controller failures below; those bursts are elided)
[2024-12-09 18:05:46.265660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
[2024-12-09 18:05:46.265683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
(the same CQ transport error / failed-state pair follows for cnode8 at 18:05:46.267835, cnode6 at .270417, cnode2 at .272582, cnode4 at .275049, cnode3 at .277644, cnode1 at .279673, cnode5 at .282617 and cnode10 at .285279)
[2024-12-09 18:05:46.287855] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
(the same notice repeats for cnode2 through cnode9, at 18:05:46.290346, .292732, .295102, .297382, .299622, .301840, .304143 and .306146 respectively)
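At this point all ten controllers have failed (transport error -6 is ENXIO, as the log itself spells out) and bdev_nvme keeps declining to start another failover while one is already pending. Not part of this trace, but a way to watch the same state from a second shell, assuming the standard SPDK RPC set and that the output shape may vary between SPDK versions:

    # Dump the bdev_nvme controllers over the same RPC socket the test uses;
    # bdev_nvme_get_controllers is a standard SPDK RPC.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq .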
[2024-12-09 18:05:46.306375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184800
[2024-12-09 18:05:46.306415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9a0ba000 sqhd:7210 p:0 m:0 dnr:0
(the same WRITE / ABORTED - SQ DELETION pair is printed for every outstanding command on the deleted submission queue: 63 more WRITEs covering lba 24704 through 32640 in 128-block steps, against 0x10000-byte memory regions with keys 0x184800, 0x184b00 and 0x182e00; the repeats are elided)
[2024-12-09 18:05:46.342764] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
(the same notice repeats at 18:05:46.342910 through .343210 for cnode9, cnode7, cnode8, cnode6, cnode2, cnode4, cnode3, cnode1, cnode5 and cnode10)
[2024-12-09 18:05:46.358441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-12-09 18:05:46.358481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
[2024-12-09 18:05:46.358502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
(further failover-in-progress notices follow at 18:05:46.358567 through .358653 for cnode9, cnode7, cnode8, cnode6 and cnode4)
00:17:38.553 [2024-12-09 18:05:46.358677] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:17:38.553 [2024-12-09 18:05:46.358697] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:17:38.553 [2024-12-09 18:05:46.359157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:17:38.553 [2024-12-09 18:05:46.359183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:17:38.553 [2024-12-09 18:05:46.359204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:17:38.553 [2024-12-09 18:05:46.362302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:17:38.553 [2024-12-09 18:05:46.362330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:17:38.553 task offset: 35840 on job bdev=Nvme1n1 fails
00:17:38.553
00:17:38.553 Latency(us)
00:17:38.553 [2024-12-09T17:05:46.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:38.553 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.553 Job: Nvme1n1 ended in about 1.88 seconds with error
00:17:38.553 Verification LBA range: start 0x0 length 0x400
00:17:38.553 Nvme1n1 : 1.88 136.46 8.53 34.12 0.00 370791.55 29360.13 1060320.05
00:17:38.553 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.553 Job: Nvme2n1 ended in about 1.88 seconds with error
00:17:38.553 Verification LBA range: start 0x0 length 0x400
00:17:38.553 Nvme2n1 : 1.88 136.31 8.52 34.08 0.00 367292.58 32925.29 1053609.16
00:17:38.553 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.553 Job: Nvme3n1 ended in about 1.88 seconds with error
00:17:38.553 Verification LBA range: start 0x0 length 0x400
00:17:38.553 Nvme3n1 : 1.88 161.17 10.07 34.04 0.00 317803.64 3591.37 1053609.16
00:17:38.553 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.553 Job: Nvme4n1 ended in about 1.88 seconds with error
00:17:38.553 Verification LBA range: start 0x0 length 0x400
00:17:38.553 Nvme4n1 : 1.88 152.50 9.53 34.01 0.00 329795.08 14994.64 1053609.16
00:17:38.554 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.554 Job: Nvme5n1 ended in about 1.88 seconds with error
00:17:38.554 Verification LBA range: start 0x0 length 0x400
00:17:38.554 Nvme5n1 : 1.88 140.13 8.76 33.97 0.00 350315.65 21390.95 1060320.05
00:17:38.554 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.554 Job: Nvme6n1 ended in about 1.89 seconds with error
00:17:38.554 Verification LBA range: start 0x0 length 0x400
00:17:38.554 Nvme6n1 : 1.89 135.76 8.48 33.94 0.00 356062.99 36909.88 1053609.16
00:17:38.554 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.554 Job: Nvme7n1 ended in about 1.89 seconds with error
00:17:38.554 Verification LBA range: start 0x0 length 0x400
00:17:38.554 Nvme7n1 : 1.89 135.66 8.48 33.92 0.00 353073.89 52219.08 1053609.16
00:17:38.554 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.554 Job: Nvme8n1 ended in about 1.89 seconds with error
00:17:38.554 Verification LBA range: start 0x0 length 0x400
00:17:38.554 Nvme8n1 : 1.89 135.57 8.47 33.89 0.00 350011.39 65850.57 1053609.16
00:17:38.554 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.554 Job: Nvme9n1 ended in about 1.89 seconds with error
00:17:38.554 Verification LBA range: start 0x0 length 0x400
00:17:38.554 Nvme9n1 : 1.89 135.48 8.47 33.87 0.00 346932.18 51170.51 1053609.16
00:17:38.554 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:38.554 Job: Nvme10n1 ended in about 1.84 seconds with error
00:17:38.554 Verification LBA range: start 0x0 length 0x400
00:17:38.554 Nvme10n1 : 1.84 104.14 6.51 34.71 0.00 420809.11 51589.94 1067030.94
00:17:38.554 [2024-12-09T17:05:46.533Z] ===================================================================================================================
00:17:38.554 [2024-12-09T17:05:46.533Z] Total : 1373.17 85.82 340.54 0.00 354176.08 3591.37 1067030.94
00:17:38.554 [2024-12-09 18:05:46.390173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:38.554 [2024-12-09 18:05:46.390197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:17:38.554 [2024-12-09 18:05:46.390211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:17:38.554 [2024-12-09 18:05:46.401496] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:38.554 [2024-12-09 18:05:46.401555] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:38.554 [2024-12-09 18:05:46.401584] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:17:38.554 [2024-12-09 18:05:46.401716] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:38.554 [2024-12-09 18:05:46.401751] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:38.554 [2024-12-09 18:05:46.401775] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e3200
00:17:38.554 [2024-12-09 18:05:46.401883] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:38.554 [2024-12-09 18:05:46.401927] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:38.554 [2024-12-09 18:05:46.402000] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d8d40
00:17:38.554 [2024-12-09 18:05:46.408832] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:38.554 [2024-12-09 18:05:46.408886] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:17:38.554 [2024-12-09 18:05:46.408914] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017089e00
00:17:38.554 [2024-12-09 18:05:46.409129] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:17:38.554 [2024-12-09 18:05:46.409159] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA
connect error -74 00:17:38.554 [2024-12-09 18:05:46.409180] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cb580 00:17:38.554 [2024-12-09 18:05:46.409296] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:38.554 [2024-12-09 18:05:46.409324] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:38.554 [2024-12-09 18:05:46.409343] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cc0c0 00:17:38.554 [2024-12-09 18:05:46.410532] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:38.554 [2024-12-09 18:05:46.410568] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:38.554 [2024-12-09 18:05:46.410588] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708d2c0 00:17:38.554 [2024-12-09 18:05:46.410697] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:38.554 [2024-12-09 18:05:46.410726] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:38.554 [2024-12-09 18:05:46.410746] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001707e000 00:17:38.554 [2024-12-09 18:05:46.410839] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:38.554 [2024-12-09 18:05:46.410866] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:38.554 [2024-12-09 18:05:46.410886] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf380 00:17:38.554 [2024-12-09 18:05:46.410980] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:17:38.554 [2024-12-09 18:05:46.411009] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:17:38.554 [2024-12-09 18:05:46.411029] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017052c40 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2367179 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2367179 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:17:38.813 18:05:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:38.813 18:05:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2367179 00:17:39.749 [2024-12-09 18:05:47.405925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.749 [2024-12-09 18:05:47.405996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:39.749 [2024-12-09 18:05:47.407559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.749 [2024-12-09 18:05:47.407602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:17:39.749 [2024-12-09 18:05:47.409215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.749 [2024-12-09 18:05:47.409256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:17:39.749 [2024-12-09 18:05:47.409367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:39.749 [2024-12-09 18:05:47.409399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.409430] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.409465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.409512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.409521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.409529] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.409537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.409549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.409558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.409566] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.409574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.413252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.413301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:17:39.750 [2024-12-09 18:05:47.414655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.414696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:17:39.750 [2024-12-09 18:05:47.416271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.416312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:17:39.750 [2024-12-09 18:05:47.417594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.417634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:17:39.750 [2024-12-09 18:05:47.418975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.419014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:17:39.750 [2024-12-09 18:05:47.420614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.420654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:17:39.750 [2024-12-09 18:05:47.421945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:39.750 [2024-12-09 18:05:47.421995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:17:39.750 [2024-12-09 18:05:47.422022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422080] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.422227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422290] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:17:39.750 [2024-12-09 18:05:47.422359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422417] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.422482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422539] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.422606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422664] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.422730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422795] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:17:39.750 [2024-12-09 18:05:47.422860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:17:39.750 [2024-12-09 18:05:47.422889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:17:39.750 [2024-12-09 18:05:47.422918] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:17:39.750 [2024-12-09 18:05:47.422959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
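The cascade above is the bdev_nvme layer giving up on each remaining controller: nvme_ctrlr_process_init finds the controller already in an error state, the asynchronous reconnect poll fails, and each reset completes with an error. When reproducing this locally, one way to inspect the surviving controller state is the bdev_nvme_get_controllers RPC (a sketch; the rpc.py path is the one used elsewhere in this workspace, and the -n filter is optional):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_nvme_get_controllers          # dump every attached ctrlr and its state
$RPC bdev_nvme_get_controllers -n Nvme1 # or a single controller by name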
00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:39.750 rmmod nvme_rdma 00:17:39.750 rmmod nvme_fabrics 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2366863 ']' 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2366863 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2366863 ']' 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2366863 00:17:39.750 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2366863) - No such process 00:17:39.750 
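The xtrace above shows how the NOT wrapper from autotest_common.sh turns the failing `wait 2367179` into the pass the test expects: the raw exit status 255 is clamped to 127, the case statement folds that to a plain 1, and the final arithmetic test succeeds precisely because the wrapped command failed. A minimal sketch reconstructed from the trace (the real helper first validates the command word with `type -t`, as seen at autotest_common.sh@640-644, and handles more cases):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=127   # clamp signal-style statuses, e.g. 255 from wait
    case "$es" in
        127) es=1 ;;           # folded to a plain failure, per the trace
    esac
    (( !es == 0 ))             # NOT succeeds only when the command failed
}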
18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2366863 is not found' 00:17:39.750 Process with pid 2366863 is not found 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:39.750 00:17:39.750 real 0m6.204s 00:17:39.750 user 0m18.918s 00:17:39.750 sys 0m1.472s 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:39.750 ************************************ 00:17:39.750 END TEST nvmf_shutdown_tc3 00:17:39.750 ************************************ 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.750 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 ************************************ 00:17:40.011 START TEST nvmf_shutdown_tc4 00:17:40.011 ************************************ 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:40.011 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.011 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:40.012 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.012 
18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:40.012 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:40.012 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:40.012 18:05:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:40.012 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.012 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:40.012 altname enp217s0f0np0 00:17:40.012 altname ens818f0np0 00:17:40.012 inet 192.168.100.8/24 scope global mlx_0_0 00:17:40.012 valid_lft forever preferred_lft forever 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:40.012 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.012 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:40.012 altname enp217s0f1np1 00:17:40.012 altname ens818f1np1 00:17:40.012 inet 192.168.100.9/24 scope global mlx_0_1 00:17:40.012 valid_lft forever preferred_lft forever 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.012 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 
-- # get_ip_address mlx_0_1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:40.013 192.168.100.9' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:40.013 192.168.100.9' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:40.013 192.168.100.9' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:40.013 18:05:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2368089 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2368089 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2368089 ']' 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.272 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:40.272 [2024-12-09 18:05:48.066185] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:40.272 [2024-12-09 18:05:48.066232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.272 [2024-12-09 18:05:48.154879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.272 [2024-12-09 18:05:48.194835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.272 [2024-12-09 18:05:48.194873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.272 [2024-12-09 18:05:48.194882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.272 [2024-12-09 18:05:48.194890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.272 [2024-12-09 18:05:48.194913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
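The nvmftestinit trace above reduces to a few lines of shell: every RDMA-capable netdev reported by get_rdma_if_list is asked for its first IPv4 address, and the first two answers become the target IPs. A condensed sketch of that flow, with the pipeline copied from the trace (get_rdma_if_list itself lives in nvmf/common.sh and is elided here):

get_ip_address() {
    local interface=$1
    # "6: mlx_0_0 inet 192.168.100.8/24 ..." -> "192.168.100.8"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9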
00:17:40.272 [2024-12-09 18:05:48.196617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.272 [2024-12-09 18:05:48.196748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.272 [2024-12-09 18:05:48.196856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.272 [2024-12-09 18:05:48.196857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.206 18:05:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:41.206 [2024-12-09 18:05:48.990690] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdb2c80/0xdb7170) succeed. 00:17:41.206 [2024-12-09 18:05:49.000000] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdb4310/0xdf8810) succeed. 
00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.206 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.207 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.465 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:41.465 Malloc1
[2024-12-09 18:05:49.244101] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
Malloc2
00:17:41.465 Malloc3
00:17:41.465 Malloc4
00:17:41.465 Malloc5
00:17:41.723 Malloc6
00:17:41.723 Malloc7
00:17:41.723 Malloc8
00:17:41.723 Malloc9
00:17:41.723 Malloc10
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2368413
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4
00:17:41.723 18:05:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:17:41.981 [2024-12-09 18:05:49.793767] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
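
shutdown.sh@148 backgrounds the workload generator against the listener that was just created, and @150 gives it five seconds to connect before the target is shot down. Restated as a standalone sketch; the flag glosses are mine, and -O and -P are carried over from the log verbatim rather than interpreted:

    # -q 128: queue depth; -o 45056: I/O size in bytes (44 KiB); -w randwrite: access pattern
    # -t 20: run time in seconds; -r '...': transport ID of the target to connect to
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5   # let perf finish connecting its qpairs before killprocess runs
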
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2368089
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2368089 ']'
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2368089
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2368089
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2368089'
killing process with pid 2368089
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2368089
00:17:47.248 18:05:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2368089
00:17:47.248 NVMe io qpair process completion error
00:17:47.248 NVMe io qpair process completion error
00:17:47.248 NVMe io qpair process completion error
00:17:47.248 NVMe io qpair process completion error
00:17:47.248 NVMe io qpair process completion error
00:17:47.508 18:05:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
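
The @954..@978 records above are autotest_common.sh's killprocess helper being traced line by line while it takes down the target (pid 2368089, reactor_1) underneath the still-running perf job; the five qpair completion errors are perf reacting immediately to the loss. In sketch form, assuming the shape those line references imply:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1          # the '[' -z 2368089 ']' guard
        kill -0 "$pid"                     # fail fast if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
        fi
        [[ $process_name != sudo ]]        # refuse to kill a wrapping sudo by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it and surface its exit status
    }

The storm of aborted writes that follows is the same event at the I/O level: every queued randwrite completes with an error status once the controllers disappear.
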
00:17:48.078 Write completed with error (sct=0, sc=8)
00:17:48.078 starting I/O failed: -6
00:17:48.078 Write completed with error (sct=0, sc=8) [the same record repeats for every outstanding I/O on the qpair]
00:17:48.079 [2024-12-09 18:05:55.871510] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:17:48.079 Write completed with error (sct=0, sc=8) [repeated]
00:17:48.079 [2024-12-09 18:05:55.884238] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:17:48.079 Write completed with error (sct=0, sc=8) [repeated]
00:17:48.080 starting I/O failed: -6
00:17:48.080 [2024-12-09 18:05:55.897429] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:17:48.080 Write completed with error (sct=0, sc=8) [repeated]
00:17:48.081 starting I/O failed: -6
00:17:48.081 [2024-12-09 18:05:55.910914] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:17:48.081 Write completed with error (sct=0, sc=8) [repeated]
00:17:48.081 [2024-12-09 18:05:55.924032] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:17:48.081 NVMe io qpair process completion error [the same record repeats five times]
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2368413
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2368413
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:17:48.651 18:05:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2368413
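
shutdown.sh@158 wraps the wait in NOT: tc4 passes only if spdk_nvme_perf exits nonzero, which losing all of its controllers mid-run should guarantee. A sketch of the helper the @652/@655 records walk through, assuming the usual autotest_common.sh shape (the real helper first vets its argument via valid_exec_arg, the @640/@644 'type -t' case seen above, which this sketch omits):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capturing its exit status
        (( es != 0 ))    # invert it: the wrapped command failing is success
    }
    NOT wait "$perfpid"  # passes here only because perf dies with a nonzero status
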
00:17:49.221 Write completed with error (sct=0, sc=8) [repeated while the remaining qpairs drain]
00:17:49.221 [2024-12-09 18:05:56.928029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:49.221 [2024-12-09 18:05:56.928099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:17:49.221 Write completed with error (sct=0, sc=8) [repeated]
00:17:49.221 [2024-12-09 18:05:56.930549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:49.221 [2024-12-09 18:05:56.930597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:17:49.222 Write completed with error (sct=0, sc=8) [repeated]
00:17:49.222 [2024-12-09 18:05:56.942206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:49.222 [2024-12-09 18:05:56.942252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:17:49.222 Write completed with error (sct=0, sc=8) [repeated]
00:17:49.222 [2024-12-09 18:05:56.944745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:49.222 [2024-12-09 18:05:56.944786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:17:49.222 Write completed with error (sct=0, sc=8) [repeated]
00:17:49.222 [2024-12-09 18:05:56.954770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:49.222 [2024-12-09 18:05:56.954840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:17:49.223 Write completed with error (sct=0, sc=8) [repeated]
00:17:49.223 [2024-12-09 18:05:56.968985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:49.223 [2024-12-09 18:05:56.969055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 [2024-12-09 18:05:56.971047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:49.223 [2024-12-09 18:05:56.971092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.223 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed 
with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 [2024-12-09 18:05:56.982218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 [2024-12-09 18:05:56.982287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 [2024-12-09 18:05:56.984817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:49.224 [2024-12-09 18:05:56.984861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error 
(sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 Write completed with error (sct=0, sc=8) 00:17:49.224 [2024-12-09 18:05:57.022724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:49.224 [2024-12-09 18:05:57.022789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:17:49.224 Initializing NVMe Controllers 00:17:49.224 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:17:49.224 Controller IO queue size 128, less than required. 00:17:49.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.224 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2 00:17:49.224 Controller IO queue size 128, less than required. 00:17:49.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.224 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7 00:17:49.224 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.225 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6 00:17:49.225 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5 00:17:49.225 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4 00:17:49.225 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9 00:17:49.225 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10 00:17:49.225 Controller IO queue size 128, less than required. 00:17:49.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
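Every aborted completion above reports the same NVMe status, sct=0 (Generic Command Status) with sc=8, which the NVMe base specification defines as "Command Aborted due to SQ Deletion"; the accompanying CQ transport error -6 is ENXIO (No such device or address). Both are the expected signature of this shutdown test, which deletes subsystems while writes are still queued. A minimal bash sketch of decoding such a status pair (an illustrative subset of the SCT 0 table only; the helper name is made up for this note, not an SPDK tool):

  decode_nvme_status() {
      # Map an (sct, sc) pair from a completion entry to its spec name.
      # Covers only a few Generic Command Status (SCT 0) values.
      local sct=$1 sc=$2
      case "$sct:$sc" in
          0:0) echo 'SUCCESSFUL COMPLETION' ;;
          0:7) echo 'COMMAND ABORT REQUESTED' ;;
          0:8) echo 'COMMAND ABORTED DUE TO SQ DELETION' ;;
          0:*) echo "generic command status, sc=$sc" ;;
          *)   echo "sct=$sct sc=$sc" ;;
      esac
  }
  decode_nvme_status 0 8    # -> COMMAND ABORTED DUE TO SQ DELETION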
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:17:49.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:17:49.225 Initialization complete. Launching workers.
00:17:49.225 ========================================================
00:17:49.225                                                                                  Latency(us)
00:17:49.225 Device Information                                                         :     IOPS   MiB/s   Average       min        max
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1538.93   66.13  82327.93  115.06  1168234.66
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1533.06   65.87  82728.71  118.76  1196683.12
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1555.87   66.85  96001.45  119.20  2175267.44
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1524.17   65.49  83290.40  115.77  1228310.33
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1542.62   66.28  96892.45  114.49  2213352.02
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1535.91   66.00  97440.86  115.74  2232804.13
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1526.01   65.57  83203.79  118.61  1206745.49
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1584.55   68.09  94515.94  114.71  2068979.40
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1514.44   65.07  83916.28  114.49  1233203.61
00:17:49.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1552.85   66.72  96524.87  114.62  2183772.45
00:17:49.225 ========================================================
00:17:49.225 Total                                                                      : 15408.39  662.08  89735.28  114.49  2232804.13
00:17:49.225
00:17:49.225 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
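A consistency check on the table: MiB/s should equal IOPS times the I/O size. For the cnode3 row, 66.13 MiB/s divided by 1538.93 IOPS is about 0.043 MiB per command, i.e. an I/O size of roughly 44 KiB, and the Total row (662.08 / 15408.39) agrees. The perf invocation itself is not shown in this excerpt, so the 44 KiB figure is inferred from the table, not read from the command line. The same arithmetic as a quick awk sketch:

  # Infer the I/O size implied by one row of the latency table above.
  awk 'BEGIN {
      iops = 1538.93; mibs = 66.13            # cnode3 row
      bytes = mibs * 1048576 / iops           # MiB/s -> bytes per command
      printf "implied I/O size: %.0f bytes (~%.0f KiB)\n", bytes, bytes / 1024
  }'
  # prints roughly: implied I/O size: 45059 bytes (~44 KiB)

00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:49.225 18:05:57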
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:49.225 rmmod nvme_rdma 00:17:49.225 rmmod nvme_fabrics 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2368089 ']' 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2368089 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2368089 ']' 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2368089 00:17:49.225 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2368089) - No such process 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2368089 is not found' 00:17:49.225 Process with pid 2368089 is not found 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:49.225 00:17:49.225 real 0m9.370s 00:17:49.225 user 0m34.958s 00:17:49.225 sys 0m1.421s 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:49.225 ************************************ 00:17:49.225 END TEST nvmf_shutdown_tc4 00:17:49.225 ************************************ 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:17:49.225 00:17:49.225 real 0m36.016s 00:17:49.225 user 1m48.430s 00:17:49.225 sys 0m11.308s 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.225 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:49.225 ************************************ 00:17:49.225 END TEST nvmf_shutdown 00:17:49.225 ************************************ 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.485 ************************************ 00:17:49.485 START TEST nvmf_nsid 00:17:49.485 ************************************ 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:17:49.485 * Looking for test storage... 00:17:49.485 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:49.485 18:05:57 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:49.485 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:49.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.486 --rc genhtml_branch_coverage=1 00:17:49.486 --rc genhtml_function_coverage=1 00:17:49.486 --rc genhtml_legend=1 00:17:49.486 --rc geninfo_all_blocks=1 00:17:49.486 --rc geninfo_unexecuted_blocks=1 00:17:49.486 00:17:49.486 ' 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:49.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.486 --rc genhtml_branch_coverage=1 00:17:49.486 --rc genhtml_function_coverage=1 00:17:49.486 --rc genhtml_legend=1 00:17:49.486 --rc geninfo_all_blocks=1 00:17:49.486 --rc geninfo_unexecuted_blocks=1 00:17:49.486 00:17:49.486 ' 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:49.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.486 --rc genhtml_branch_coverage=1 00:17:49.486 --rc genhtml_function_coverage=1 00:17:49.486 --rc genhtml_legend=1 00:17:49.486 --rc geninfo_all_blocks=1 00:17:49.486 --rc geninfo_unexecuted_blocks=1 00:17:49.486 00:17:49.486 ' 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:49.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.486 --rc genhtml_branch_coverage=1 00:17:49.486 --rc genhtml_function_coverage=1 00:17:49.486 --rc genhtml_legend=1 00:17:49.486 --rc geninfo_all_blocks=1 00:17:49.486 --rc geninfo_unexecuted_blocks=1 00:17:49.486 00:17:49.486 ' 00:17:49.486 18:05:57 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:49.486 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.745 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.745 18:05:57 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.988 18:06:04 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.988 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:57.989 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:57.989 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
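The device scan above matches PCI functions against known vendor:device pairs: 0x15b3 is the Mellanox vendor ID and 0x1015 is the ConnectX-4 Lx device ID in the PCI ID database, so both ports of the dual-port NIC at 0000:d9:00.0 and 0000:d9:00.1 qualify. A stand-alone sketch of the same idea using lspci instead of common.sh's pci_bus_cache (the whitelist below is a small illustrative subset of the IDs the script registers):

  # List PCI functions whose [vendor:device] pair is in a whitelist,
  # e.g. 15b3:1015 = Mellanox ConnectX-4 Lx, as found in this run.
  want='15b3:1015 15b3:1017 8086:159b'
  lspci -Dnn | while read -r line; do
      for id in $want; do
          case "$line" in
              *"[$id]"*) echo "candidate NIC: $line" ;;
          esac
      done
  done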
00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:57.989 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:57.989 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:57.989 18:06:04 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
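allocate_nic_ips resolves each RDMA interface's IPv4 address with the three-command pipeline traced above: `ip -o -4 addr show DEV` prints one single-line record per address, awk picks field 4 (the CIDR-notation address), and cut strips the prefix length. Condensed from the trace into a self-contained helper:

  get_ip_address() {
      # Print the first IPv4 address assigned to an interface,
      # without the CIDR prefix length (192.168.100.8/24 -> 192.168.100.8).
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8 on this test bed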
00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:57.989 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:57.989 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:57.989 altname enp217s0f0np0 00:17:57.989 altname ens818f0np0 00:17:57.989 inet 192.168.100.8/24 scope global mlx_0_0 00:17:57.989 valid_lft forever preferred_lft forever 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:57.989 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:57.989 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:57.989 altname enp217s0f1np1 00:17:57.989 altname ens818f1np1 00:17:57.989 inet 192.168.100.9/24 scope global mlx_0_1 00:17:57.989 valid_lft forever preferred_lft forever 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.989 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:57.990 
18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:57.990 192.168.100.9' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:57.990 192.168.100.9' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:57.990 192.168.100.9' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:57.990 18:06:04 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2373379 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2373379 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2373379 ']' 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.990 18:06:04 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.990 [2024-12-09 18:06:04.793873] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:57.990 [2024-12-09 18:06:04.793924] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.990 [2024-12-09 18:06:04.881916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.990 [2024-12-09 18:06:04.921770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.990 [2024-12-09 18:06:04.921804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.990 [2024-12-09 18:06:04.921814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.990 [2024-12-09 18:06:04.921822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.990 [2024-12-09 18:06:04.921829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
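nvmfappstart above forks nvmf_tgt and then waitforlisten blocks until its RPC socket answers; a rough approximation of that wait loop, assuming the binary path and socket shown in the trace (the real helper in autotest_common.sh carries extra retry bookkeeping):

    app=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$app" -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app listens on /var/tmp/spdk.sock
        "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
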
00:17:57.990 [2024-12-09 18:06:04.922396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2373607 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6905f8b0-2fb6-49f1-9351-84a1e92102a0 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e04710f1-3989-458a-9ad4-1ce8244d7bd4 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=372886b5-c5a0-4d8e-b252-d44994a3796e 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.990 18:06:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.990 null0 00:17:57.990 null1 00:17:57.990 [2024-12-09 18:06:05.111877] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:17:57.990 [2024-12-09 18:06:05.111925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373607 ] 00:17:57.990 null2 00:17:57.990 [2024-12-09 18:06:05.143021] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24ae0e0/0x24bea30) succeed. 00:17:57.990 [2024-12-09 18:06:05.151850] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24af590/0x253eac0) succeed. 00:17:57.990 [2024-12-09 18:06:05.201101] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:57.990 [2024-12-09 18:06:05.204339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2373607 /var/tmp/tgt2.sock 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2373607 ']' 00:17:57.990 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:17:57.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.991 [2024-12-09 18:06:05.244982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:57.991 [2024-12-09 18:06:05.801269] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16d1cb0/0x164c6a0) succeed. 00:17:57.991 [2024-12-09 18:06:05.812077] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18207c0/0x168dd40) succeed. 
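The null0/null1/null2 bdevs and the second target just brought up on /var/tmp/tgt2.sock get wired together over RPC before the connect phase below; a hypothetical reconstruction with standalone rpc.py calls (bdev size and block size are assumptions, and the exact RPC split between the two targets is not visible in the trace; the NQN, UUID, address, and port are the ones in this log):

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t rdma
    for b in null0 null1 null2; do
        $rpc bdev_null_create "$b" 64 512    # 64 MiB, 512-byte blocks (assumed)
    done
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 --uuid 6905f8b0-2fb6-49f1-9351-84a1e92102a0
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4421
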
00:17:57.991 [2024-12-09 18:06:05.854041] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:57.991 nvme0n1 nvme0n2 00:17:57.991 nvme1n1 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:57.991 18:06:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6905f8b0-2fb6-49f1-9351-84a1e92102a0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6905f8b02fb649f1935184a1e92102a0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6905F8B02FB649F1935184A1E92102A0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6905F8B02FB649F1935184A1E92102A0 == \6\9\0\5\F\8\B\0\2\F\B\6\4\9\F\1\9\3\5\1\8\4\A\1\E\9\2\1\0\2\A\0 ]] 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:06.147 18:06:12 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e04710f1-3989-458a-9ad4-1ce8244d7bd4 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e04710f13989458a9ad41ce8244d7bd4 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E04710F13989458A9AD41CE8244D7BD4 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E04710F13989458A9AD41CE8244D7BD4 == \E\0\4\7\1\0\F\1\3\9\8\9\4\5\8\A\9\A\D\4\1\C\E\8\2\4\4\D\7\B\D\4 ]] 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 372886b5-c5a0-4d8e-b252-d44994a3796e 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=372886b5c5a04d8eb252d44994a3796e 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 372886B5C5A04D8EB252D44994A3796E 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 372886B5C5A04D8EB252D44994A3796E == 
\3\7\2\8\8\6\B\5\C\5\A\0\4\D\8\E\B\2\5\2\D\4\4\9\9\4\A\3\7\9\6\E ]] 00:18:06.147 18:06:12 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2373607 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2373607 ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2373607 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2373607 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2373607' 00:18:12.710 killing process with pid 2373607 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2373607 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2373607 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:12.710 rmmod nvme_rdma 00:18:12.710 rmmod nvme_fabrics 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2373379 ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2373379 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2373379 ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2373379 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2373379 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2373379' 00:18:12.710 killing process with pid 2373379 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2373379 00:18:12.710 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2373379 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:12.970 00:18:12.970 real 0m23.547s 00:18:12.970 user 0m33.309s 00:18:12.970 sys 0m6.812s 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:12.970 ************************************ 00:18:12.970 END TEST nvmf_nsid 00:18:12.970 ************************************ 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:12.970 00:18:12.970 real 8m4.286s 00:18:12.970 user 18m51.131s 00:18:12.970 sys 2m24.330s 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.970 18:06:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.970 ************************************ 00:18:12.970 END TEST nvmf_target_extra 00:18:12.970 ************************************ 00:18:12.970 18:06:20 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:12.970 18:06:20 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:12.970 18:06:20 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.970 18:06:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:12.970 ************************************ 00:18:12.970 START TEST nvmf_host 00:18:12.970 ************************************ 00:18:12.970 18:06:20 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:18:13.230 * Looking for test storage... 
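Recapping the nsid test that just finished: its core assertion is that each namespace's NGUID equals the creation UUID with the dashes stripped; a condensed sketch of the traced uuid2nguid/nvme_get_nguid steps, using the first UUID of this run:

    uuid=6905f8b0-2fb6-49f1-9351-84a1e92102a0
    expected=$(tr -d - <<< "$uuid")    # uuid2nguid: drop the dashes
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # The test normalizes both sides to uppercase before comparing
    [[ ${nguid^^} == "${expected^^}" ]] || exit 1
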
00:18:13.230 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.230 --rc genhtml_branch_coverage=1 00:18:13.230 --rc genhtml_function_coverage=1 00:18:13.230 --rc genhtml_legend=1 00:18:13.230 --rc geninfo_all_blocks=1 00:18:13.230 --rc geninfo_unexecuted_blocks=1 00:18:13.230 00:18:13.230 ' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:18:13.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.230 --rc genhtml_branch_coverage=1 00:18:13.230 --rc genhtml_function_coverage=1 00:18:13.230 --rc genhtml_legend=1 00:18:13.230 --rc geninfo_all_blocks=1 00:18:13.230 --rc geninfo_unexecuted_blocks=1 00:18:13.230 00:18:13.230 ' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.230 --rc genhtml_branch_coverage=1 00:18:13.230 --rc genhtml_function_coverage=1 00:18:13.230 --rc genhtml_legend=1 00:18:13.230 --rc geninfo_all_blocks=1 00:18:13.230 --rc geninfo_unexecuted_blocks=1 00:18:13.230 00:18:13.230 ' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.230 --rc genhtml_branch_coverage=1 00:18:13.230 --rc genhtml_function_coverage=1 00:18:13.230 --rc genhtml_legend=1 00:18:13.230 --rc geninfo_all_blocks=1 00:18:13.230 --rc geninfo_unexecuted_blocks=1 00:18:13.230 00:18:13.230 ' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.230 18:06:21 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.231 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.231 ************************************ 00:18:13.231 START TEST nvmf_multicontroller 00:18:13.231 ************************************ 00:18:13.231 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:13.493 * Looking for test storage... 00:18:13.493 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:18:13.493 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.494 --rc genhtml_branch_coverage=1 00:18:13.494 --rc genhtml_function_coverage=1 00:18:13.494 --rc genhtml_legend=1 00:18:13.494 --rc geninfo_all_blocks=1 00:18:13.494 --rc geninfo_unexecuted_blocks=1 00:18:13.494 00:18:13.494 ' 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.494 --rc genhtml_branch_coverage=1 00:18:13.494 --rc genhtml_function_coverage=1 00:18:13.494 --rc genhtml_legend=1 00:18:13.494 --rc geninfo_all_blocks=1 00:18:13.494 --rc geninfo_unexecuted_blocks=1 00:18:13.494 00:18:13.494 ' 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.494 --rc genhtml_branch_coverage=1 00:18:13.494 --rc genhtml_function_coverage=1 00:18:13.494 --rc genhtml_legend=1 00:18:13.494 --rc geninfo_all_blocks=1 00:18:13.494 --rc geninfo_unexecuted_blocks=1 00:18:13.494 00:18:13.494 ' 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.494 --rc genhtml_branch_coverage=1 00:18:13.494 --rc genhtml_function_coverage=1 00:18:13.494 --rc genhtml_legend=1 00:18:13.494 --rc geninfo_all_blocks=1 00:18:13.494 --rc geninfo_unexecuted_blocks=1 00:18:13.494 00:18:13.494 ' 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
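The lt/cmp_versions helper traced through scripts/common.sh above (here deciding whether the installed lcov is older than 2.x) compares dotted version strings field by field; a minimal sketch of the same logic, with missing fields defaulted to 0 as a simplification:

    version_lt() {    # like scripts/common.sh lt(): true if $1 < $2
        local -a ver1 ver2
        local IFS=.-: v ver1_l ver2_l    # split on the separators the trace shows
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates the 2.x output format"
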
00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.494 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.495 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.495 18:06:21 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:13.495 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:18:13.495 00:18:13.495 real 0m0.233s 00:18:13.495 user 0m0.129s 00:18:13.495 sys 0m0.122s 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:18:13.495 ************************************ 00:18:13.495 END TEST nvmf_multicontroller 00:18:13.495 ************************************ 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.495 18:06:21 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.756 ************************************ 00:18:13.756 START TEST nvmf_aer 00:18:13.756 ************************************ 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:13.756 * Looking for test storage... 
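The recurring "common.sh: line 33: [: : integer expression expected" message in the traces above is noise from testing an empty variable numerically ('[' '' -eq 1 ']'); the usual guard for that pattern looks like this (the variable name here is a stand-in, the real one is whatever common.sh line 33 reads):

    # SPDK_TEST_EXAMPLE is hypothetical; default an unset flag to 0
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
        : # flag-specific setup
    fi
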
00:18:13.756 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.756 --rc genhtml_branch_coverage=1 00:18:13.756 --rc genhtml_function_coverage=1 00:18:13.756 --rc genhtml_legend=1 00:18:13.756 --rc geninfo_all_blocks=1 00:18:13.756 --rc geninfo_unexecuted_blocks=1 00:18:13.756 00:18:13.756 ' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.756 --rc genhtml_branch_coverage=1 00:18:13.756 --rc genhtml_function_coverage=1 00:18:13.756 --rc genhtml_legend=1 00:18:13.756 --rc geninfo_all_blocks=1 00:18:13.756 --rc geninfo_unexecuted_blocks=1 00:18:13.756 00:18:13.756 ' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.756 --rc genhtml_branch_coverage=1 00:18:13.756 --rc genhtml_function_coverage=1 00:18:13.756 --rc genhtml_legend=1 00:18:13.756 --rc geninfo_all_blocks=1 00:18:13.756 --rc geninfo_unexecuted_blocks=1 00:18:13.756 00:18:13.756 ' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.756 --rc genhtml_branch_coverage=1 00:18:13.756 --rc genhtml_function_coverage=1 00:18:13.756 --rc genhtml_legend=1 00:18:13.756 --rc geninfo_all_blocks=1 00:18:13.756 --rc geninfo_unexecuted_blocks=1 00:18:13.756 00:18:13.756 ' 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.756 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.757 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.757 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:18:14.017 18:06:21 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.591 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:20.851 18:06:28 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:20.851 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:20.851 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:20.851 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:20.852 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:20.852 
18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:20.852 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.852 18:06:28 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:20.852 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.852 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:20.852 altname enp217s0f0np0 00:18:20.852 altname ens818f0np0 00:18:20.852 inet 192.168.100.8/24 scope global mlx_0_0 00:18:20.852 valid_lft forever preferred_lft forever 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:20.852 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.852 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:20.852 altname enp217s0f1np1 00:18:20.852 altname ens818f1np1 00:18:20.852 inet 192.168.100.9/24 scope global mlx_0_1 00:18:20.852 valid_lft forever preferred_lft forever 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:20.852 192.168.100.9' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:20.852 192.168.100.9' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:20.852 192.168.100.9' 
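Everything from gather_supported_nvmf_pci_devs down to the RDMA_IP_LIST assignment above is NIC discovery: common.sh matches the Mellanox/Intel PCI device IDs, resolves the net device behind each PCI function (mlx_0_0 and mlx_0_1 here), and reads one IPv4 address per RDMA interface through a small ip/awk/cut pipeline. A standalone sketch of that address read, per common.sh@116-117:

    # Print the bare IPv4 address of an interface; "ip -o -4" emits one
    # line per address and field 4 is the CIDR form, e.g. 192.168.100.8/24.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig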
00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:20.852 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2379977 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2379977 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2379977 ']' 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.111 18:06:28 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.111 [2024-12-09 18:06:28.891703] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:18:21.111 [2024-12-09 18:06:28.891762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.111 [2024-12-09 18:06:28.969359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.111 [2024-12-09 18:06:29.011694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.111 [2024-12-09 18:06:29.011731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.111 [2024-12-09 18:06:29.011741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.111 [2024-12-09 18:06:29.011749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
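nvmfappstart above boots the target with the traced command line and then blocks in waitforlisten until the RPC socket answers; the EAL and app_setup_trace notices that follow are nvmf_tgt's own startup output. A hand-rolled equivalent of the step, assuming the backgrounding and pid capture that the helper performs internally:

    # Start the target and wait for its RPC socket to come up.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper; polls /var/tmp/spdk.sock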
00:18:21.111 [2024-12-09 18:06:29.011756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.111 [2024-12-09 18:06:29.015965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.111 [2024-12-09 18:06:29.016008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.111 [2024-12-09 18:06:29.016117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.111 [2024-12-09 18:06:29.016119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.369 [2024-12-09 18:06:29.194165] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x848980/0x84ce70) succeed. 00:18:21.369 [2024-12-09 18:06:29.203464] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x84a010/0x88e510) succeed. 
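The rpc_cmd nvmf_create_transport call is what produced the two create_ib_device notices above: the RDMA transport binds one ibv device per mlx5 port. rpc_cmd is a thin wrapper over the JSON-RPC client, so an equivalent manual invocation (assuming the stock scripts/rpc.py and its default socket option) would be:

    # Manual equivalent of the traced transport-creation RPC; -u is the
    # in-capsule data size in bytes.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192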
00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.369 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.627 Malloc0 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.627 [2024-12-09 18:06:29.382215] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.627 [ 00:18:21.627 { 00:18:21.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:21.627 "subtype": "Discovery", 00:18:21.627 "listen_addresses": [], 00:18:21.627 "allow_any_host": true, 00:18:21.627 "hosts": [] 00:18:21.627 }, 00:18:21.627 { 00:18:21.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.627 "subtype": "NVMe", 00:18:21.627 "listen_addresses": [ 00:18:21.627 { 00:18:21.627 "trtype": "RDMA", 00:18:21.627 "adrfam": "IPv4", 00:18:21.627 "traddr": "192.168.100.8", 00:18:21.627 "trsvcid": "4420" 00:18:21.627 } 00:18:21.627 ], 00:18:21.627 "allow_any_host": true, 00:18:21.627 "hosts": [], 00:18:21.627 "serial_number": "SPDK00000000000001", 00:18:21.627 "model_number": "SPDK bdev Controller", 00:18:21.627 "max_namespaces": 2, 00:18:21.627 "min_cntlid": 1, 00:18:21.627 "max_cntlid": 65519, 00:18:21.627 "namespaces": [ 00:18:21.627 { 00:18:21.627 "nsid": 1, 00:18:21.627 "bdev_name": "Malloc0", 00:18:21.627 "name": "Malloc0", 00:18:21.627 "nguid": "A1424C2E67874A2FAA193AA1DBC6DD74", 00:18:21.627 "uuid": "a1424c2e-6787-4a2f-aa19-3aa1dbc6dd74" 00:18:21.627 } 00:18:21.627 ] 00:18:21.627 } 00:18:21.627 ] 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2380036 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:18:21.627 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.885 Malloc1 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.885 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.885 [ 00:18:21.885 { 00:18:21.885 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:21.885 "subtype": "Discovery", 00:18:21.885 "listen_addresses": [], 00:18:21.885 "allow_any_host": true, 00:18:21.885 "hosts": [] 00:18:21.885 }, 00:18:21.885 { 00:18:21.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.885 "subtype": "NVMe", 00:18:21.885 "listen_addresses": [ 00:18:21.885 { 00:18:21.885 "trtype": "RDMA", 00:18:21.885 "adrfam": "IPv4", 00:18:21.886 "traddr": "192.168.100.8", 00:18:21.886 "trsvcid": "4420" 00:18:21.886 } 00:18:21.886 ], 00:18:21.886 "allow_any_host": true, 00:18:21.886 "hosts": [], 00:18:21.886 "serial_number": "SPDK00000000000001", 00:18:21.886 "model_number": "SPDK bdev Controller", 00:18:21.886 "max_namespaces": 2, 00:18:21.886 "min_cntlid": 1, 00:18:21.886 "max_cntlid": 65519, 00:18:21.886 "namespaces": [ 00:18:21.886 { 00:18:21.886 "nsid": 1, 00:18:21.886 "bdev_name": "Malloc0", 00:18:21.886 "name": "Malloc0", 00:18:21.886 "nguid": "A1424C2E67874A2FAA193AA1DBC6DD74", 00:18:21.886 "uuid": "a1424c2e-6787-4a2f-aa19-3aa1dbc6dd74" 00:18:21.886 }, 00:18:21.886 { 00:18:21.886 "nsid": 2, 00:18:21.886 "bdev_name": "Malloc1", 00:18:21.886 "name": "Malloc1", 00:18:21.886 "nguid": "2677BB3D3ADC4B8F8BF479C131476E2E", 00:18:21.886 "uuid": "2677bb3d-3adc-4b8f-8bf4-79c131476e2e" 00:18:21.886 } 00:18:21.886 ] 00:18:21.886 } 00:18:21.886 ] 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2380036 00:18:21.886 Asynchronous Event Request test 00:18:21.886 Attaching to 192.168.100.8 00:18:21.886 Attached to 192.168.100.8 00:18:21.886 Registering asynchronous event callbacks... 00:18:21.886 Starting namespace attribute notice tests for all controllers... 00:18:21.886 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:21.886 aer_cb - Changed Namespace 00:18:21.886 Cleaning up... 
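That output closes the loop: attaching Malloc1 as nsid 2 raised a namespace-attribute AEN (aen_event_type 0x02 is Notice), and the host then read log page 0x04, the Changed Namespace List. The harness only issues the rpc_cmd that adds Malloc1 after the aer binary creates /tmp/aer_touch_file, which the waitforfile loop traced earlier polls for; a sketch of that loop, with the timeout behavior an assumption about the real helper:

    # Poll for a file the way autotest_common.sh@1269-1280 does above;
    # giving up after ~20s (200 * 0.1s) is assumed, not traced.
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }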
00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:21.886 rmmod nvme_rdma 00:18:21.886 rmmod nvme_fabrics 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2379977 ']' 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2379977 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2379977 ']' 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2379977 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.886 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2379977 00:18:22.144 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.144 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.144 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2379977' 00:18:22.144 killing process 
with pid 2379977 00:18:22.144 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2379977 00:18:22.144 18:06:29 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2379977 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:22.403 00:18:22.403 real 0m8.626s 00:18:22.403 user 0m6.335s 00:18:22.403 sys 0m6.037s 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:18:22.403 ************************************ 00:18:22.403 END TEST nvmf_aer 00:18:22.403 ************************************ 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.403 ************************************ 00:18:22.403 START TEST nvmf_async_init 00:18:22.403 ************************************ 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:22.403 * Looking for test storage... 00:18:22.403 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.403 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:18:22.663 
18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.663 --rc genhtml_branch_coverage=1 00:18:22.663 --rc genhtml_function_coverage=1 00:18:22.663 --rc genhtml_legend=1 00:18:22.663 --rc geninfo_all_blocks=1 00:18:22.663 --rc geninfo_unexecuted_blocks=1 00:18:22.663 00:18:22.663 ' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.663 --rc genhtml_branch_coverage=1 00:18:22.663 --rc genhtml_function_coverage=1 00:18:22.663 --rc genhtml_legend=1 00:18:22.663 --rc geninfo_all_blocks=1 00:18:22.663 --rc geninfo_unexecuted_blocks=1 00:18:22.663 00:18:22.663 ' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.663 --rc genhtml_branch_coverage=1 00:18:22.663 --rc genhtml_function_coverage=1 00:18:22.663 --rc genhtml_legend=1 00:18:22.663 --rc geninfo_all_blocks=1 00:18:22.663 --rc geninfo_unexecuted_blocks=1 00:18:22.663 00:18:22.663 ' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.663 --rc genhtml_branch_coverage=1 00:18:22.663 --rc genhtml_function_coverage=1 00:18:22.663 --rc genhtml_legend=1 00:18:22.663 --rc geninfo_all_blocks=1 00:18:22.663 --rc geninfo_unexecuted_blocks=1 00:18:22.663 00:18:22.663 ' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
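The lcov probe traced just above repeats, for async_init, the same field-by-field dotted-version comparison used before nvmf_aer: split both versions on ., -, or :, then compare numerically position by position. A condensed sketch of the idea (purely numeric fields assumed; the real cmp_versions validates each field through its decimal helper first):

    # lt A B -> succeeds when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x option spellings'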
00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.663 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.663 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
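As when aer.sh started, sourcing common.sh regenerated the host identity above: nvme gen-hostnqn emits a UUID-based NQN and NVME_HOSTID carries its trailing UUID. A sketch of that derivation; the parameter-expansion slice is an assumed but equivalent way to recover the uuid portion:

    # Derive host identity as traced above; the ##*: slice is an assumption
    # about how common.sh extracts the uuid from the generated NQN.
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # e.g. 8013ee90-59d8-e711-906e-00163566263e
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")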
00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=24b4d44acaba4d1388830b70c51ef254 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:18:22.664 18:06:30 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:18:30.787 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:30.788 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:30.788 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:30.788 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:30.788 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:30.788 18:06:37 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:30.788 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:30.788 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:30.788 altname enp217s0f0np0 00:18:30.788 altname ens818f0np0 00:18:30.788 inet 192.168.100.8/24 scope global mlx_0_0 00:18:30.788 valid_lft forever preferred_lft forever 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:30.788 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:30.788 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:30.788 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:30.788 altname enp217s0f1np1 00:18:30.788 altname ens818f1np1 00:18:30.788 inet 192.168.100.9/24 scope global mlx_0_1 00:18:30.789 valid_lft forever preferred_lft forever 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:30.789 192.168.100.9' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:30.789 192.168.100.9' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:30.789 192.168.100.9' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2383475 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2383475 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2383475 ']' 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 [2024-12-09 18:06:37.734275] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:18:30.789 [2024-12-09 18:06:37.734325] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.789 [2024-12-09 18:06:37.822300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.789 [2024-12-09 18:06:37.860787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.789 [2024-12-09 18:06:37.860824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.789 [2024-12-09 18:06:37.860836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.789 [2024-12-09 18:06:37.860844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.789 [2024-12-09 18:06:37.860851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
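The nvmfappstart step above reduces to launching the target with the flags recorded in the trace and waiting for its RPC socket. A minimal sketch, assuming the stock build path and the default /var/tmp/spdk.sock socket (both visible in the log):

    # Start the NVMe-oF target: app instance 0, all tracepoint groups
    # (-e 0xFFFF), single reactor on core 0 (-m 0x1), as traced above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # waitforlisten: poll until the UNIX-domain RPC socket exists.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"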
00:18:30.789 [2024-12-09 18:06:37.861484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 [2024-12-09 18:06:38.034618] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18126a0/0x1816b90) succeed. 00:18:30.789 [2024-12-09 18:06:38.044628] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1813b50/0x1858230) succeed. 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 null0 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 24b4d44acaba4d1388830b70c51ef254 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 [2024-12-09 18:06:38.123429] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 nvme0n1 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.789 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.789 [ 00:18:30.790 { 00:18:30.790 "name": "nvme0n1", 00:18:30.790 "aliases": [ 00:18:30.790 "24b4d44a-caba-4d13-8883-0b70c51ef254" 00:18:30.790 ], 00:18:30.790 "product_name": "NVMe disk", 00:18:30.790 "block_size": 512, 00:18:30.790 "num_blocks": 2097152, 00:18:30.790 "uuid": "24b4d44a-caba-4d13-8883-0b70c51ef254", 00:18:30.790 "numa_id": 1, 00:18:30.790 "assigned_rate_limits": { 00:18:30.790 "rw_ios_per_sec": 0, 00:18:30.790 "rw_mbytes_per_sec": 0, 00:18:30.790 "r_mbytes_per_sec": 0, 00:18:30.790 "w_mbytes_per_sec": 0 00:18:30.790 }, 00:18:30.790 "claimed": false, 00:18:30.790 "zoned": false, 00:18:30.790 "supported_io_types": { 00:18:30.790 "read": true, 00:18:30.790 "write": true, 00:18:30.790 "unmap": false, 00:18:30.790 "flush": true, 00:18:30.790 "reset": true, 00:18:30.790 "nvme_admin": true, 00:18:30.790 "nvme_io": true, 00:18:30.790 "nvme_io_md": false, 00:18:30.790 "write_zeroes": true, 00:18:30.790 "zcopy": false, 00:18:30.790 "get_zone_info": false, 00:18:30.790 "zone_management": false, 00:18:30.790 "zone_append": false, 00:18:30.790 "compare": true, 00:18:30.790 "compare_and_write": true, 00:18:30.790 "abort": true, 00:18:30.790 "seek_hole": false, 00:18:30.790 "seek_data": false, 00:18:30.790 "copy": true, 00:18:30.790 "nvme_iov_md": false 00:18:30.790 }, 00:18:30.790 "memory_domains": [ 00:18:30.790 { 00:18:30.790 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:30.790 "dma_device_type": 0 00:18:30.790 } 00:18:30.790 ], 00:18:30.790 "driver_specific": { 00:18:30.790 "nvme": [ 00:18:30.790 { 00:18:30.790 "trid": { 00:18:30.790 "trtype": "RDMA", 00:18:30.790 "adrfam": "IPv4", 00:18:30.790 "traddr": "192.168.100.8", 00:18:30.790 "trsvcid": "4420", 00:18:30.790 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:30.790 }, 00:18:30.790 "ctrlr_data": { 00:18:30.790 "cntlid": 1, 00:18:30.790 "vendor_id": "0x8086", 00:18:30.790 "model_number": "SPDK bdev Controller", 00:18:30.790 "serial_number": "00000000000000000000", 00:18:30.790 "firmware_revision": "25.01", 00:18:30.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.790 "oacs": { 00:18:30.790 "security": 0, 
00:18:30.790 "format": 0, 00:18:30.790 "firmware": 0, 00:18:30.790 "ns_manage": 0 00:18:30.790 }, 00:18:30.790 "multi_ctrlr": true, 00:18:30.790 "ana_reporting": false 00:18:30.790 }, 00:18:30.790 "vs": { 00:18:30.790 "nvme_version": "1.3" 00:18:30.790 }, 00:18:30.790 "ns_data": { 00:18:30.790 "id": 1, 00:18:30.790 "can_share": true 00:18:30.790 } 00:18:30.790 } 00:18:30.790 ], 00:18:30.790 "mp_policy": "active_passive" 00:18:30.790 } 00:18:30.790 } 00:18:30.790 ] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 [2024-12-09 18:06:38.236790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:30.790 [2024-12-09 18:06:38.254113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:30.790 [2024-12-09 18:06:38.283257] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 [ 00:18:30.790 { 00:18:30.790 "name": "nvme0n1", 00:18:30.790 "aliases": [ 00:18:30.790 "24b4d44a-caba-4d13-8883-0b70c51ef254" 00:18:30.790 ], 00:18:30.790 "product_name": "NVMe disk", 00:18:30.790 "block_size": 512, 00:18:30.790 "num_blocks": 2097152, 00:18:30.790 "uuid": "24b4d44a-caba-4d13-8883-0b70c51ef254", 00:18:30.790 "numa_id": 1, 00:18:30.790 "assigned_rate_limits": { 00:18:30.790 "rw_ios_per_sec": 0, 00:18:30.790 "rw_mbytes_per_sec": 0, 00:18:30.790 "r_mbytes_per_sec": 0, 00:18:30.790 "w_mbytes_per_sec": 0 00:18:30.790 }, 00:18:30.790 "claimed": false, 00:18:30.790 "zoned": false, 00:18:30.790 "supported_io_types": { 00:18:30.790 "read": true, 00:18:30.790 "write": true, 00:18:30.790 "unmap": false, 00:18:30.790 "flush": true, 00:18:30.790 "reset": true, 00:18:30.790 "nvme_admin": true, 00:18:30.790 "nvme_io": true, 00:18:30.790 "nvme_io_md": false, 00:18:30.790 "write_zeroes": true, 00:18:30.790 "zcopy": false, 00:18:30.790 "get_zone_info": false, 00:18:30.790 "zone_management": false, 00:18:30.790 "zone_append": false, 00:18:30.790 "compare": true, 00:18:30.790 "compare_and_write": true, 00:18:30.790 "abort": true, 00:18:30.790 "seek_hole": false, 00:18:30.790 "seek_data": false, 00:18:30.790 "copy": true, 00:18:30.790 "nvme_iov_md": false 00:18:30.790 }, 00:18:30.790 "memory_domains": [ 00:18:30.790 { 00:18:30.790 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:30.790 "dma_device_type": 0 00:18:30.790 } 00:18:30.790 ], 00:18:30.790 "driver_specific": { 00:18:30.790 "nvme": [ 00:18:30.790 { 00:18:30.790 "trid": { 00:18:30.790 "trtype": "RDMA", 00:18:30.790 "adrfam": "IPv4", 00:18:30.790 "traddr": "192.168.100.8", 
00:18:30.790 "trsvcid": "4420", 00:18:30.790 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:30.790 }, 00:18:30.790 "ctrlr_data": { 00:18:30.790 "cntlid": 2, 00:18:30.790 "vendor_id": "0x8086", 00:18:30.790 "model_number": "SPDK bdev Controller", 00:18:30.790 "serial_number": "00000000000000000000", 00:18:30.790 "firmware_revision": "25.01", 00:18:30.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.790 "oacs": { 00:18:30.790 "security": 0, 00:18:30.790 "format": 0, 00:18:30.790 "firmware": 0, 00:18:30.790 "ns_manage": 0 00:18:30.790 }, 00:18:30.790 "multi_ctrlr": true, 00:18:30.790 "ana_reporting": false 00:18:30.790 }, 00:18:30.790 "vs": { 00:18:30.790 "nvme_version": "1.3" 00:18:30.790 }, 00:18:30.790 "ns_data": { 00:18:30.790 "id": 1, 00:18:30.790 "can_share": true 00:18:30.790 } 00:18:30.790 } 00:18:30.790 ], 00:18:30.790 "mp_policy": "active_passive" 00:18:30.790 } 00:18:30.790 } 00:18:30.790 ] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.n76EQBb64d 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.n76EQBb64d 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.n76EQBb64d 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 [2024-12-09 18:06:38.378112] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 [2024-12-09 18:06:38.398165] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.790 nvme0n1 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.790 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.790 [ 00:18:30.790 { 00:18:30.790 "name": "nvme0n1", 00:18:30.790 "aliases": [ 00:18:30.791 "24b4d44a-caba-4d13-8883-0b70c51ef254" 00:18:30.791 ], 00:18:30.791 "product_name": "NVMe disk", 00:18:30.791 "block_size": 512, 00:18:30.791 "num_blocks": 2097152, 00:18:30.791 "uuid": "24b4d44a-caba-4d13-8883-0b70c51ef254", 00:18:30.791 "numa_id": 1, 00:18:30.791 "assigned_rate_limits": { 00:18:30.791 "rw_ios_per_sec": 0, 00:18:30.791 "rw_mbytes_per_sec": 0, 00:18:30.791 "r_mbytes_per_sec": 0, 00:18:30.791 "w_mbytes_per_sec": 0 00:18:30.791 }, 00:18:30.791 "claimed": false, 00:18:30.791 "zoned": false, 00:18:30.791 "supported_io_types": { 00:18:30.791 "read": true, 00:18:30.791 "write": true, 00:18:30.791 "unmap": false, 00:18:30.791 "flush": true, 00:18:30.791 "reset": true, 00:18:30.791 "nvme_admin": true, 00:18:30.791 "nvme_io": true, 00:18:30.791 "nvme_io_md": false, 00:18:30.791 "write_zeroes": true, 00:18:30.791 "zcopy": false, 00:18:30.791 "get_zone_info": false, 00:18:30.791 "zone_management": false, 00:18:30.791 "zone_append": false, 00:18:30.791 "compare": true, 00:18:30.791 "compare_and_write": true, 00:18:30.791 "abort": true, 00:18:30.791 "seek_hole": false, 00:18:30.791 "seek_data": false, 00:18:30.791 "copy": true, 00:18:30.791 "nvme_iov_md": false 00:18:30.791 }, 00:18:30.791 "memory_domains": [ 00:18:30.791 { 00:18:30.791 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:30.791 "dma_device_type": 0 00:18:30.791 } 00:18:30.791 ], 00:18:30.791 "driver_specific": { 00:18:30.791 "nvme": [ 00:18:30.791 { 00:18:30.791 "trid": { 00:18:30.791 "trtype": "RDMA", 00:18:30.791 "adrfam": "IPv4", 00:18:30.791 "traddr": "192.168.100.8", 00:18:30.791 "trsvcid": "4421", 00:18:30.791 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:30.791 }, 00:18:30.791 "ctrlr_data": { 00:18:30.791 "cntlid": 3, 00:18:30.791 "vendor_id": "0x8086", 00:18:30.791 "model_number": "SPDK bdev Controller", 00:18:30.791 
"serial_number": "00000000000000000000", 00:18:30.791 "firmware_revision": "25.01", 00:18:30.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.791 "oacs": { 00:18:30.791 "security": 0, 00:18:30.791 "format": 0, 00:18:30.791 "firmware": 0, 00:18:30.791 "ns_manage": 0 00:18:30.791 }, 00:18:30.791 "multi_ctrlr": true, 00:18:30.791 "ana_reporting": false 00:18:30.791 }, 00:18:30.791 "vs": { 00:18:30.791 "nvme_version": "1.3" 00:18:30.791 }, 00:18:30.791 "ns_data": { 00:18:30.791 "id": 1, 00:18:30.791 "can_share": true 00:18:30.791 } 00:18:30.791 } 00:18:30.791 ], 00:18:30.791 "mp_policy": "active_passive" 00:18:30.791 } 00:18:30.791 } 00:18:30.791 ] 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.n76EQBb64d 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:30.791 rmmod nvme_rdma 00:18:30.791 rmmod nvme_fabrics 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2383475 ']' 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2383475 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2383475 ']' 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2383475 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2383475 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.791 18:06:38 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2383475' 00:18:30.791 killing process with pid 2383475 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2383475 00:18:30.791 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2383475 00:18:31.050 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:31.050 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:31.050 00:18:31.050 real 0m8.626s 00:18:31.050 user 0m3.322s 00:18:31.050 sys 0m5.934s 00:18:31.050 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.050 18:06:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:18:31.050 ************************************ 00:18:31.050 END TEST nvmf_async_init 00:18:31.050 ************************************ 00:18:31.051 18:06:38 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:31.051 18:06:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:31.051 18:06:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.051 18:06:38 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.051 ************************************ 00:18:31.051 START TEST dma 00:18:31.051 ************************************ 00:18:31.051 18:06:38 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:31.051 * Looking for test storage... 
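Condensing the RPC flow the async_init test just exercised (transport options, bdev sizes, NGUID, addresses, and NQNs are copied from the trace; only the use of scripts/rpc.py as the client is an assumption, since the test drives the same RPCs through rpc_cmd):

    # Target side: transport, null bdev, subsystem, namespace, listener.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 24b4d44acaba4d1388830b70c51ef254
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420

    # Host side: attach, inspect, reset, detach. In the JSON dumps above,
    # cntlid increments from 1 to 2 across the controller reset.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma \
        -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0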
00:18:31.051 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:31.051 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:31.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.311 --rc genhtml_branch_coverage=1 00:18:31.311 --rc genhtml_function_coverage=1 00:18:31.311 --rc genhtml_legend=1 00:18:31.311 --rc geninfo_all_blocks=1 00:18:31.311 --rc geninfo_unexecuted_blocks=1 00:18:31.311 00:18:31.311 ' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:31.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.311 --rc genhtml_branch_coverage=1 00:18:31.311 --rc genhtml_function_coverage=1 00:18:31.311 --rc genhtml_legend=1 00:18:31.311 --rc geninfo_all_blocks=1 00:18:31.311 --rc geninfo_unexecuted_blocks=1 00:18:31.311 00:18:31.311 ' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:31.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.311 --rc genhtml_branch_coverage=1 00:18:31.311 --rc genhtml_function_coverage=1 00:18:31.311 --rc genhtml_legend=1 00:18:31.311 --rc geninfo_all_blocks=1 00:18:31.311 --rc geninfo_unexecuted_blocks=1 00:18:31.311 00:18:31.311 ' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:31.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.311 --rc genhtml_branch_coverage=1 00:18:31.311 --rc genhtml_function_coverage=1 00:18:31.311 --rc genhtml_legend=1 00:18:31.311 --rc geninfo_all_blocks=1 00:18:31.311 --rc geninfo_unexecuted_blocks=1 00:18:31.311 00:18:31.311 ' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.311 18:06:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.312 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
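The NIC discovery that follows (and that already ran once for async_init above) maps each matching PCI function to its kernel netdev through sysfs. A condensed sketch: population of the pci_bus_cache associative array (via lspci or similar) is assumed rather than shown, and the vendor/device ids are the ones echoed in the trace:

    # ConnectX-4 Lx functions found in the log: 0000:d9:00.0 and 0000:d9:00.1
    mellanox=0x15b3
    mlx=( ${pci_bus_cache["$mellanox:0x1015"]} )

    for pci in "${mlx[@]}"; do
        # Each PCI device exposes its net interfaces under sysfs.
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done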
00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:18:31.312 18:06:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:39.437 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:39.437 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:39.437 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:39.437 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:18:39.437 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:18:39.437 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:39.438     link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:18:39.438     altname enp217s0f0np0
00:18:39.438     altname ens818f0np0
00:18:39.438     inet 192.168.100.8/24 scope global mlx_0_0
00:18:39.438        valid_lft forever preferred_lft forever
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}'
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:18:39.438 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:18:39.438     link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:18:39.438     altname enp217s0f1np1
00:18:39.438     altname ens818f1np1
00:18:39.438     inet 192.168.100.9/24 scope global mlx_0_1
00:18:39.438        valid_lft forever preferred_lft forever
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh
rxe-net 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:39.438 192.168.100.9' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:39.438 192.168.100.9' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:39.438 192.168.100.9' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=2387030 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 2387030 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 2387030 ']' 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.438 18:06:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.438 [2024-12-09 18:06:46.408929] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:18:39.438 [2024-12-09 18:06:46.408997] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.438 [2024-12-09 18:06:46.500343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.438 [2024-12-09 18:06:46.539406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.438 [2024-12-09 18:06:46.539445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.438 [2024-12-09 18:06:46.539454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.438 [2024-12-09 18:06:46.539462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.438 [2024-12-09 18:06:46.539469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
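nvmfappstart above amounts to launching the target in the background and blocking until its RPC socket answers. A reduced sketch of that start-and-wait pattern, with paths relative to the spdk checkout (the polling loop is illustrative; waitforlisten's real implementation differs in detail):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # retry until the app listens on the UNIX domain socket
    done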
00:18:39.438 [2024-12-09 18:06:46.540721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.438 [2024-12-09 18:06:46.540721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.438 [2024-12-09 18:06:47.308859] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1888200/0x188c6f0) succeed. 00:18:39.438 [2024-12-09 18:06:47.317761] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1889750/0x18cdd90) succeed. 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.438 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 Malloc0 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 [2024-12-09 18:06:47.461408] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=()
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:18:39.698 {
00:18:39.698   "params": {
00:18:39.698     "name": "Nvme$subsystem",
00:18:39.698     "trtype": "$TEST_TRANSPORT",
00:18:39.698     "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:39.698     "adrfam": "ipv4",
00:18:39.698     "trsvcid": "$NVMF_PORT",
00:18:39.698     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:39.698     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:39.698     "hdgst": ${hdgst:-false},
00:18:39.698     "ddgst": ${ddgst:-false}
00:18:39.698   },
00:18:39.698   "method": "bdev_nvme_attach_controller"
00:18:39.698 }
00:18:39.698 EOF
00:18:39.698 )")
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq .
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=,
00:18:39.698 18:06:47 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:18:39.698   "params": {
00:18:39.698     "name": "Nvme0",
00:18:39.698     "trtype": "rdma",
00:18:39.698     "traddr": "192.168.100.8",
00:18:39.698     "adrfam": "ipv4",
00:18:39.698     "trsvcid": "4420",
00:18:39.698     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:18:39.698     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:18:39.698     "hdgst": false,
00:18:39.698     "ddgst": false
00:18:39.698   },
00:18:39.698   "method": "bdev_nvme_attach_controller"
00:18:39.698 }'
00:18:39.698 [2024-12-09 18:06:47.513597] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
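The rpc_cmd calls traced at host/dma.sh@96 through @100 above are thin wrappers over rpc.py talking to the default /var/tmp/spdk.sock. Written out as plain invocations, the target-side setup for this test is:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_malloc_create 256 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420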
00:18:39.698 [2024-12-09 18:06:47.513651] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387217 ]
00:18:39.698 [2024-12-09 18:06:47.603390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:39.698 [2024-12-09 18:06:47.644059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:39.698 [2024-12-09 18:06:47.644061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:46.267 bdev Nvme0n1 reports 1 memory domains
00:18:46.267 bdev Nvme0n1 supports RDMA memory domain
00:18:46.267 Initialization complete, running randrw IO for 5 sec on 2 cores
00:18:46.267 ==========================================================================
00:18:46.267                                             Latency [us]
00:18:46.267              IOPS      MiB/s    Average        min        max
00:18:46.267    Core 2: 21621.42      84.46     739.38     247.55    8710.08
00:18:46.267    Core 3: 21604.02      84.39     739.94     245.50    8800.48
00:18:46.267 ==========================================================================
00:18:46.267    Total : 43225.43     168.85     739.66     245.50    8800.48
00:18:46.267
00:18:46.267 Total operations: 216149, translate 216149 pull_push 0 memzero 0
00:18:46.267 18:06:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
00:18:46.267 18:06:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json
00:18:46.267 18:06:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq .
00:18:51.541 [2024-12-09 18:06:53.064744] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
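The counter line that closes each run records which DMA path was exercised: Nvme0n1 exposes an RDMA memory domain, so all 216149 operations completed as in-place address translations (translate), while the Malloc0 run that follows has no RDMA memory domain and must bounce data through local buffers, which shows up in the pull_push counter instead. Run outside the harness, the invocation looks like this (a sketch: the config the harness pipes in over /dev/fd/62 is written to an ordinary file here, and the filename is made up):

    test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
        --json /tmp/bdev_nvme0.json -b Nvme0n1 -f -x translate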
00:18:46.267 [2024-12-09 18:06:53.064803] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2388271 ]
00:18:46.267 [2024-12-09 18:06:53.155323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:46.267 [2024-12-09 18:06:53.192637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:46.267 [2024-12-09 18:06:53.192638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:51.541 bdev Malloc0 reports 2 memory domains
00:18:51.541 bdev Malloc0 doesn't support RDMA memory domain
00:18:51.541 Initialization complete, running randrw IO for 5 sec on 2 cores
00:18:51.541 ==========================================================================
00:18:51.541                                             Latency [us]
00:18:51.541              IOPS      MiB/s    Average        min        max
00:18:51.541    Core 2: 14072.37      54.97    1136.30     417.18    2024.24
00:18:51.541    Core 3: 14197.52      55.46    1126.25     479.80    2096.29
00:18:51.541 ==========================================================================
00:18:51.541    Total : 28269.89     110.43    1131.25     417.18    2096.29
00:18:51.541
00:18:51.541 Total operations: 141406, translate 0 pull_push 565624 memzero 0
00:18:51.541 18:06:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero
00:18:51.541 18:06:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0
00:18:51.541 18:06:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:18:51.541 18:06:58 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:18:51.541 Ignoring -M option
00:18:51.541 [2024-12-09 18:06:58.522780] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
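The remaining runs target lvs0/lvol0, a logical volume stacked on the exported namespace, which the harness assembles through gen_lvol_nvme_json. A hypothetical reconstruction of the equivalent manual provisioning (standard SPDK RPCs, but the exact sequence and the 64 MiB size are assumed, not traced from this job):

    $rpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_create_lvstore Nvme0n1 lvs0    # lvstore on the attached namespace
    $rpc bdev_lvol_create -l lvs0 lvol0 64        # 64 MiB volume, size assumed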
00:18:51.541 [2024-12-09 18:06:58.522834] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2389075 ]
00:18:51.541 [2024-12-09 18:06:58.614244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:51.541 [2024-12-09 18:06:58.653869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:51.541 [2024-12-09 18:06:58.653872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:56.808 bdev 1fa76ef8-6197-49f3-a0fd-10e3298ca703 reports 1 memory domains
00:18:56.808 bdev 1fa76ef8-6197-49f3-a0fd-10e3298ca703 supports RDMA memory domain
00:18:56.808 Initialization complete, running randread IO for 5 sec on 2 cores
00:18:56.808 ==========================================================================
00:18:56.808                                             Latency [us]
00:18:56.808              IOPS      MiB/s    Average        min        max
00:18:56.808    Core 2: 74285.73     290.18     214.62      79.36    3443.05
00:18:56.808    Core 3: 72004.75     281.27     221.39      73.60    2021.46
00:18:56.808 ==========================================================================
00:18:56.808    Total : 146290.47    571.45     217.95      73.60    3443.05
00:18:56.808
00:18:56.808 Total operations: 731522, translate 0 pull_push 0 memzero 731522
00:18:56.808 18:07:04 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
00:18:56.808 [2024-12-09 18:07:04.203478] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:18:58.789 Initializing NVMe Controllers
00:18:58.789 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:18:58.789 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:18:58.789 Initialization complete. Launching workers.
00:18:58.789 ========================================================
00:18:58.789                                                                             Latency(us)
00:18:58.789 Device Information                                                 :    IOPS      MiB/s    Average        min        max
00:18:58.789 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7965.71 5993.78 9969.09
00:18:58.789 ========================================================
00:18:58.789 Total                                                              : 2016.00 7.88 7965.71 5993.78 9969.09
00:18:58.789
00:18:58.789 18:07:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate
00:18:58.789 18:07:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0
00:18:58.789 18:07:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0
00:18:58.789 18:07:06 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq .
00:19:05.349 [2024-12-09 18:07:06.573034] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
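The subsystem.c warning in the perf run above fires because spdk_nvme_perf connects to the discovery service on a listener that was never registered with the discovery subsystem. The non-deprecated setup adds it explicitly before running the host tool, e.g. (sketch, using the well-known discovery NQN):

    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t rdma -a 192.168.100.8 -s 4420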
00:18:58.789 [2024-12-09 18:07:06.573089] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390413 ]
00:18:58.789 [2024-12-09 18:07:06.661310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:18:58.789 [2024-12-09 18:07:06.701353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:58.789 [2024-12-09 18:07:06.701354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:05.349 bdev e4c92b1e-3ee5-429e-b24d-31ed39129631 reports 1 memory domains
00:19:05.349 bdev e4c92b1e-3ee5-429e-b24d-31ed39129631 supports RDMA memory domain
00:19:05.349 Initialization complete, running randrw IO for 5 sec on 2 cores
00:19:05.349 ==========================================================================
00:19:05.349                                             Latency [us]
00:19:05.349              IOPS      MiB/s    Average        min        max
00:19:05.349    Core 2: 18986.20      74.16     842.07      14.94   12631.38
00:19:05.349    Core 3: 19239.73      75.16     830.96      13.17   12263.28
00:19:05.349 ==========================================================================
00:19:05.349    Total : 38225.93     149.32     836.48      13.17   12631.38
00:19:05.349
00:19:05.349 Total operations: 191184, translate 191079 pull_push 0 memzero 105
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:19:05.350 rmmod nvme_rdma
00:19:05.350 rmmod nvme_fabrics
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 2387030 ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 2387030
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 2387030 ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 2387030
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2387030
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2387030'
00:19:05.350 killing
process with pid 2387030
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 2387030
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 2387030
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:05.350
00:19:05.350 real 0m33.621s
00:19:05.350 user 1m36.800s
00:19:05.350 sys 0m6.842s
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:19:05.350 ************************************
00:19:05.350 END TEST dma
00:19:05.350 ************************************
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:05.350 ************************************
00:19:05.350 START TEST nvmf_identify
00:19:05.350 ************************************
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma
00:19:05.350 * Looking for test storage...
00:19:05.350 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364
-- # (( v = 0 )) 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.350 --rc genhtml_branch_coverage=1 00:19:05.350 --rc genhtml_function_coverage=1 00:19:05.350 --rc genhtml_legend=1 00:19:05.350 --rc geninfo_all_blocks=1 00:19:05.350 --rc geninfo_unexecuted_blocks=1 00:19:05.350 00:19:05.350 ' 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:05.350 18:07:12 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.350 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.351 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:05.351 18:07:12 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.351 18:07:12 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.924 18:07:19 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:11.924 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:11.924 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:11.924 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:11.924 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:11.924 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:12.184 18:07:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:12.184 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.184 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:12.184 altname enp217s0f0np0 00:19:12.184 altname ens818f0np0 00:19:12.184 inet 192.168.100.8/24 scope global mlx_0_0 00:19:12.184 valid_lft forever preferred_lft forever 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:12.184 18:07:20 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:12.184 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:12.184 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:12.184 altname enp217s0f1np1 00:19:12.184 altname ens818f1np1 00:19:12.184 inet 192.168.100.9/24 scope global mlx_0_1 00:19:12.184 valid_lft forever preferred_lft forever 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:19:12.184 18:07:20 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:12.184 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:12.185 192.168.100.9' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:12.185 192.168.100.9' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:12.185 192.168.100.9' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2394894 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 2394894 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2394894 ']' 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.185 18:07:20 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:12.443 [2024-12-09 18:07:20.193020] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:19:12.443 [2024-12-09 18:07:20.193070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.443 [2024-12-09 18:07:20.285576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.443 [2024-12-09 18:07:20.327504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.443 [2024-12-09 18:07:20.327546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.443 [2024-12-09 18:07:20.327556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.443 [2024-12-09 18:07:20.327565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.443 [2024-12-09 18:07:20.327572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.443 [2024-12-09 18:07:20.332966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.443 [2024-12-09 18:07:20.333023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.443 [2024-12-09 18:07:20.333154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.443 [2024-12-09 18:07:20.333155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 [2024-12-09 18:07:21.050836] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1264980/0x1268e70) succeed. 00:19:13.378 [2024-12-09 18:07:21.060186] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1266010/0x12aa510) succeed. 
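The create_ib_device notices above confirm the RDMA transport claimed both mlx5 ports. The target-side setup that host/identify.sh drives next (visible in the rpc_cmd entries below) can be reproduced by hand against a running nvmf_tgt; a minimal sketch, assuming the stock scripts/rpc.py client and its default /var/tmp/spdk.sock socket:

  # RDMA transport, a 64 MiB RAM-backed malloc bdev, and a subsystem exposing it
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  # listeners: one for the NVM subsystem, one for the discovery service
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420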
00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 Malloc0 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 [2024-12-09 18:07:21.292096] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 [ 00:19:13.378 { 00:19:13.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:13.378 "subtype": "Discovery", 00:19:13.378 "listen_addresses": [ 00:19:13.378 { 00:19:13.378 "trtype": "RDMA", 
00:19:13.378 "adrfam": "IPv4", 00:19:13.378 "traddr": "192.168.100.8", 00:19:13.378 "trsvcid": "4420" 00:19:13.378 } 00:19:13.378 ], 00:19:13.378 "allow_any_host": true, 00:19:13.378 "hosts": [] 00:19:13.378 }, 00:19:13.378 { 00:19:13.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.378 "subtype": "NVMe", 00:19:13.378 "listen_addresses": [ 00:19:13.378 { 00:19:13.378 "trtype": "RDMA", 00:19:13.378 "adrfam": "IPv4", 00:19:13.378 "traddr": "192.168.100.8", 00:19:13.378 "trsvcid": "4420" 00:19:13.378 } 00:19:13.378 ], 00:19:13.378 "allow_any_host": true, 00:19:13.378 "hosts": [], 00:19:13.378 "serial_number": "SPDK00000000000001", 00:19:13.378 "model_number": "SPDK bdev Controller", 00:19:13.378 "max_namespaces": 32, 00:19:13.378 "min_cntlid": 1, 00:19:13.378 "max_cntlid": 65519, 00:19:13.378 "namespaces": [ 00:19:13.378 { 00:19:13.378 "nsid": 1, 00:19:13.378 "bdev_name": "Malloc0", 00:19:13.378 "name": "Malloc0", 00:19:13.378 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:13.378 "eui64": "ABCDEF0123456789", 00:19:13.378 "uuid": "34f57599-16d6-4213-bc78-0542fa75ca1f" 00:19:13.378 } 00:19:13.378 ] 00:19:13.378 } 00:19:13.378 ] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.378 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:13.378 [2024-12-09 18:07:21.350342] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:19:13.378 [2024-12-09 18:07:21.350380] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395049 ] 00:19:13.640 [2024-12-09 18:07:21.413087] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:13.640 [2024-12-09 18:07:21.413163] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:19:13.640 [2024-12-09 18:07:21.413180] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:19:13.640 [2024-12-09 18:07:21.413186] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:19:13.640 [2024-12-09 18:07:21.413223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:13.640 [2024-12-09 18:07:21.431424] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:19:13.640 [2024-12-09 18:07:21.441557] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:13.640 [2024-12-09 18:07:21.441568] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:13.640 [2024-12-09 18:07:21.441576] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441586] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441593] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441599] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441605] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441611] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441617] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441623] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441629] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441636] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441642] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441648] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441654] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441660] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:19:13.640 [2024-12-09 18:07:21.441666] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441672] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441678] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441684] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441690] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441696] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441703] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441709] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441715] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 
18:07:21.441721] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441727] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441733] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441739] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441745] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441751] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441757] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441764] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441769] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:13.641 [2024-12-09 18:07:21.441775] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:13.641 [2024-12-09 18:07:21.441781] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:13.641 [2024-12-09 18:07:21.441802] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.441816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x181d00 00:19:13.641 [2024-12-09 18:07:21.446954] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.446964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.446971] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.446979] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:13.641 [2024-12-09 18:07:21.446986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:13.641 [2024-12-09 18:07:21.446993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:13.641 [2024-12-09 18:07:21.447007] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447039] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:13.641 [2024-12-09 18:07:21.447058] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:13.641 [2024-12-09 18:07:21.447073] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447097] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:13.641 [2024-12-09 18:07:21.447116] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:13.641 [2024-12-09 18:07:21.447130] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447157] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:13.641 [2024-12-09 18:07:21.447177] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447185] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447212] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447224] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:13.641 [2024-12-09 18:07:21.447230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:13.641 [2024-12-09 18:07:21.447236] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 
18:07:21.447243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:13.641 [2024-12-09 18:07:21.447353] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:13.641 [2024-12-09 18:07:21.447359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:13.641 [2024-12-09 18:07:21.447369] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447399] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:13.641 [2024-12-09 18:07:21.447417] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447425] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447452] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447464] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:13.641 [2024-12-09 18:07:21.447470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:13.641 [2024-12-09 18:07:21.447475] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:13.641 [2024-12-09 18:07:21.447496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:13.641 [2024-12-09 18:07:21.447506] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:19:13.641 [2024-12-09 18:07:21.447556] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
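The property traffic above is the standard NVMe-oF admin-queue bring-up; summarized as a commented outline of what the trace shows:

  # 1. FABRIC CONNECT on the admin queue; the target assigns CNTLID 0x0001
  # 2. PROPERTY GET VS (cdw0:10300 -> NVMe 1.3) and CAP (cdw0:1e01007f)
  # 3. PROPERTY GET CC/CSTS: CC.EN = 0 and CSTS.RDY = 0, so the controller is already disabled
  # 4. PROPERTY SET CC.EN = 1, then poll CSTS until RDY = 1 ("controller is ready")
  # 5. reset the admin queue and issue IDENTIFY CONTROLLER (cdw10:00000001, CNS 01h)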
00:19:13.641 [2024-12-09 18:07:21.447562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447571] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:13.641 [2024-12-09 18:07:21.447577] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:13.641 [2024-12-09 18:07:21.447583] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:13.641 [2024-12-09 18:07:21.447589] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:13.641 [2024-12-09 18:07:21.447597] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:13.641 [2024-12-09 18:07:21.447603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:13.641 [2024-12-09 18:07:21.447609] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:13.641 [2024-12-09 18:07:21.447624] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.641 [2024-12-09 18:07:21.447632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.641 [2024-12-09 18:07:21.447658] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.641 [2024-12-09 18:07:21.447664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:13.641 [2024-12-09 18:07:21.447675] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.642 [2024-12-09 18:07:21.447689] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.642 [2024-12-09 18:07:21.447703] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.642 [2024-12-09 18:07:21.447717] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.642 [2024-12-09 18:07:21.447729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:13.642 [2024-12-09 18:07:21.447735] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:13.642 [2024-12-09 18:07:21.447751] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.642 [2024-12-09 18:07:21.447785] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.642 [2024-12-09 18:07:21.447791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:13.642 [2024-12-09 18:07:21.447800] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:13.642 [2024-12-09 18:07:21.447806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:13.642 [2024-12-09 18:07:21.447812] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447821] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:19:13.642 [2024-12-09 18:07:21.447854] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.642 [2024-12-09 18:07:21.447860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:13.642 [2024-12-09 18:07:21.447867] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447877] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:13.642 [2024-12-09 18:07:21.447903] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x400 key:0x181d00 00:19:13.642 [2024-12-09 18:07:21.447919] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.642 [2024-12-09 18:07:21.447952] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.642 [2024-12-09 18:07:21.447958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
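The GET LOG PAGE (02) commands above with cdw10 ending in 0070 read the Discovery log page (LID 70h): a first 0x400-byte fetch of the header, a 0xc00-byte read covering the header plus both 1 KiB records, and a final 8-byte re-read of the generation counter to confirm it did not change mid-read. The same page can be pulled with stock nvme-cli once the nvme-rdma module is loaded, as it was earlier in this run:

  nvme discover -t rdma -a 192.168.100.8 -s 4420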
00:19:13.642 [2024-12-09 18:07:21.447969] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x181d00 00:19:13.642 [2024-12-09 18:07:21.447983] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.447990] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.642 [2024-12-09 18:07:21.447995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:13.642 [2024-12-09 18:07:21.448001] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.448007] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.642 [2024-12-09 18:07:21.448013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:13.642 [2024-12-09 18:07:21.448023] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.448030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x181d00 00:19:13.642 [2024-12-09 18:07:21.448038] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:19:13.642 [2024-12-09 18:07:21.448057] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.642 [2024-12-09 18:07:21.448063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:13.642 [2024-12-09 18:07:21.448073] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:19:13.642 ===================================================== 00:19:13.642 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:13.642 ===================================================== 00:19:13.642 Controller Capabilities/Features 00:19:13.642 ================================ 00:19:13.642 Vendor ID: 0000 00:19:13.642 Subsystem Vendor ID: 0000 00:19:13.642 Serial Number: .................... 00:19:13.642 Model Number: ........................................ 
00:19:13.642 Firmware Version: 25.01 00:19:13.642 Recommended Arb Burst: 0 00:19:13.642 IEEE OUI Identifier: 00 00 00 00:19:13.642 Multi-path I/O 00:19:13.642 May have multiple subsystem ports: No 00:19:13.642 May have multiple controllers: No 00:19:13.642 Associated with SR-IOV VF: No 00:19:13.642 Max Data Transfer Size: 131072 00:19:13.642 Max Number of Namespaces: 0 00:19:13.642 Max Number of I/O Queues: 1024 00:19:13.642 NVMe Specification Version (VS): 1.3 00:19:13.642 NVMe Specification Version (Identify): 1.3 00:19:13.642 Maximum Queue Entries: 128 00:19:13.642 Contiguous Queues Required: Yes 00:19:13.642 Arbitration Mechanisms Supported 00:19:13.642 Weighted Round Robin: Not Supported 00:19:13.642 Vendor Specific: Not Supported 00:19:13.642 Reset Timeout: 15000 ms 00:19:13.642 Doorbell Stride: 4 bytes 00:19:13.642 NVM Subsystem Reset: Not Supported 00:19:13.642 Command Sets Supported 00:19:13.642 NVM Command Set: Supported 00:19:13.642 Boot Partition: Not Supported 00:19:13.642 Memory Page Size Minimum: 4096 bytes 00:19:13.642 Memory Page Size Maximum: 4096 bytes 00:19:13.642 Persistent Memory Region: Not Supported 00:19:13.642 Optional Asynchronous Events Supported 00:19:13.642 Namespace Attribute Notices: Not Supported 00:19:13.642 Firmware Activation Notices: Not Supported 00:19:13.642 ANA Change Notices: Not Supported 00:19:13.642 PLE Aggregate Log Change Notices: Not Supported 00:19:13.642 LBA Status Info Alert Notices: Not Supported 00:19:13.642 EGE Aggregate Log Change Notices: Not Supported 00:19:13.642 Normal NVM Subsystem Shutdown event: Not Supported 00:19:13.642 Zone Descriptor Change Notices: Not Supported 00:19:13.642 Discovery Log Change Notices: Supported 00:19:13.642 Controller Attributes 00:19:13.642 128-bit Host Identifier: Not Supported 00:19:13.642 Non-Operational Permissive Mode: Not Supported 00:19:13.642 NVM Sets: Not Supported 00:19:13.642 Read Recovery Levels: Not Supported 00:19:13.642 Endurance Groups: Not Supported 00:19:13.642 Predictable Latency Mode: Not Supported 00:19:13.642 Traffic Based Keep ALive: Not Supported 00:19:13.642 Namespace Granularity: Not Supported 00:19:13.642 SQ Associations: Not Supported 00:19:13.642 UUID List: Not Supported 00:19:13.642 Multi-Domain Subsystem: Not Supported 00:19:13.642 Fixed Capacity Management: Not Supported 00:19:13.642 Variable Capacity Management: Not Supported 00:19:13.642 Delete Endurance Group: Not Supported 00:19:13.642 Delete NVM Set: Not Supported 00:19:13.642 Extended LBA Formats Supported: Not Supported 00:19:13.642 Flexible Data Placement Supported: Not Supported 00:19:13.642 00:19:13.642 Controller Memory Buffer Support 00:19:13.642 ================================ 00:19:13.642 Supported: No 00:19:13.642 00:19:13.642 Persistent Memory Region Support 00:19:13.642 ================================ 00:19:13.642 Supported: No 00:19:13.642 00:19:13.642 Admin Command Set Attributes 00:19:13.642 ============================ 00:19:13.642 Security Send/Receive: Not Supported 00:19:13.642 Format NVM: Not Supported 00:19:13.642 Firmware Activate/Download: Not Supported 00:19:13.642 Namespace Management: Not Supported 00:19:13.642 Device Self-Test: Not Supported 00:19:13.642 Directives: Not Supported 00:19:13.642 NVMe-MI: Not Supported 00:19:13.642 Virtualization Management: Not Supported 00:19:13.642 Doorbell Buffer Config: Not Supported 00:19:13.642 Get LBA Status Capability: Not Supported 00:19:13.642 Command & Feature Lockdown Capability: Not Supported 00:19:13.642 Abort Command Limit: 1 00:19:13.642 Async 
Event Request Limit: 4 00:19:13.642 Number of Firmware Slots: N/A 00:19:13.642 Firmware Slot 1 Read-Only: N/A 00:19:13.642 Firmware Activation Without Reset: N/A 00:19:13.642 Multiple Update Detection Support: N/A 00:19:13.642 Firmware Update Granularity: No Information Provided 00:19:13.642 Per-Namespace SMART Log: No 00:19:13.642 Asymmetric Namespace Access Log Page: Not Supported 00:19:13.642 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:13.642 Command Effects Log Page: Not Supported 00:19:13.642 Get Log Page Extended Data: Supported 00:19:13.642 Telemetry Log Pages: Not Supported 00:19:13.642 Persistent Event Log Pages: Not Supported 00:19:13.643 Supported Log Pages Log Page: May Support 00:19:13.643 Commands Supported & Effects Log Page: Not Supported 00:19:13.643 Feature Identifiers & Effects Log Page:May Support 00:19:13.643 NVMe-MI Commands & Effects Log Page: May Support 00:19:13.643 Data Area 4 for Telemetry Log: Not Supported 00:19:13.643 Error Log Page Entries Supported: 128 00:19:13.643 Keep Alive: Not Supported 00:19:13.643 00:19:13.643 NVM Command Set Attributes 00:19:13.643 ========================== 00:19:13.643 Submission Queue Entry Size 00:19:13.643 Max: 1 00:19:13.643 Min: 1 00:19:13.643 Completion Queue Entry Size 00:19:13.643 Max: 1 00:19:13.643 Min: 1 00:19:13.643 Number of Namespaces: 0 00:19:13.643 Compare Command: Not Supported 00:19:13.643 Write Uncorrectable Command: Not Supported 00:19:13.643 Dataset Management Command: Not Supported 00:19:13.643 Write Zeroes Command: Not Supported 00:19:13.643 Set Features Save Field: Not Supported 00:19:13.643 Reservations: Not Supported 00:19:13.643 Timestamp: Not Supported 00:19:13.643 Copy: Not Supported 00:19:13.643 Volatile Write Cache: Not Present 00:19:13.643 Atomic Write Unit (Normal): 1 00:19:13.643 Atomic Write Unit (PFail): 1 00:19:13.643 Atomic Compare & Write Unit: 1 00:19:13.643 Fused Compare & Write: Supported 00:19:13.643 Scatter-Gather List 00:19:13.643 SGL Command Set: Supported 00:19:13.643 SGL Keyed: Supported 00:19:13.643 SGL Bit Bucket Descriptor: Not Supported 00:19:13.643 SGL Metadata Pointer: Not Supported 00:19:13.643 Oversized SGL: Not Supported 00:19:13.643 SGL Metadata Address: Not Supported 00:19:13.643 SGL Offset: Supported 00:19:13.643 Transport SGL Data Block: Not Supported 00:19:13.643 Replay Protected Memory Block: Not Supported 00:19:13.643 00:19:13.643 Firmware Slot Information 00:19:13.643 ========================= 00:19:13.643 Active slot: 0 00:19:13.643 00:19:13.643 00:19:13.643 Error Log 00:19:13.643 ========= 00:19:13.643 00:19:13.643 Active Namespaces 00:19:13.643 ================= 00:19:13.643 Discovery Log Page 00:19:13.643 ================== 00:19:13.643 Generation Counter: 2 00:19:13.643 Number of Records: 2 00:19:13.643 Record Format: 0 00:19:13.643 00:19:13.643 Discovery Log Entry 0 00:19:13.643 ---------------------- 00:19:13.643 Transport Type: 1 (RDMA) 00:19:13.643 Address Family: 1 (IPv4) 00:19:13.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:13.643 Entry Flags: 00:19:13.643 Duplicate Returned Information: 1 00:19:13.643 Explicit Persistent Connection Support for Discovery: 1 00:19:13.643 Transport Requirements: 00:19:13.643 Secure Channel: Not Required 00:19:13.643 Port ID: 0 (0x0000) 00:19:13.643 Controller ID: 65535 (0xffff) 00:19:13.643 Admin Max SQ Size: 128 00:19:13.643 Transport Service Identifier: 4420 00:19:13.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:13.643 Transport Address: 192.168.100.8 00:19:13.643 
Transport Specific Address Subtype - RDMA 00:19:13.643 RDMA QP Service Type: 1 (Reliable Connected) 00:19:13.643 RDMA Provider Type: 1 (No provider specified) 00:19:13.643 RDMA CM Service: 1 (RDMA_CM) 00:19:13.643 Discovery Log Entry 1 00:19:13.643 ---------------------- 00:19:13.643 Transport Type: 1 (RDMA) 00:19:13.643 Address Family: 1 (IPv4) 00:19:13.643 Subsystem Type: 2 (NVM Subsystem) 00:19:13.643 Entry Flags: 00:19:13.643 Duplicate Returned Information: 0 00:19:13.643 Explicit Persistent Connection Support for Discovery: 0 00:19:13.643 Transport Requirements: 00:19:13.643 Secure Channel: Not Required 00:19:13.643 Port ID: 0 (0x0000) 00:19:13.643 Controller ID: 65535 (0xffff) 00:19:13.643 Admin Max SQ Size: [2024-12-09 18:07:21.448144] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:19:13.643 [2024-12-09 18:07:21.448154] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35408 doesn't match qid 00:19:13.643 [2024-12-09 18:07:21.448169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32585 cdw0:4bd46b80 sqhd:2880 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448175] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35408 doesn't match qid 00:19:13.643 [2024-12-09 18:07:21.448183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32585 cdw0:4bd46b80 sqhd:2880 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448189] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35408 doesn't match qid 00:19:13.643 [2024-12-09 18:07:21.448197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32585 cdw0:4bd46b80 sqhd:2880 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448203] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 35408 doesn't match qid 00:19:13.643 [2024-12-09 18:07:21.448211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32585 cdw0:4bd46b80 sqhd:2880 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448220] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448248] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448265] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448279] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448297] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448310] 
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:13.643 [2024-12-09 18:07:21.448316] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:13.643 [2024-12-09 18:07:21.448322] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448330] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448362] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448376] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448386] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448409] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448421] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448430] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448455] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448467] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448476] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448499] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448512] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448521] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448545] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448557] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448566] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.643 [2024-12-09 18:07:21.448597] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.643 [2024-12-09 18:07:21.448603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:13.643 [2024-12-09 18:07:21.448609] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:19:13.643 [2024-12-09 18:07:21.448618] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448645] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448658] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448667] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448693] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448705] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448714] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448737] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448749] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448757] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448784] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448796] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448805] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448832] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448844] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448853] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448882] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448893] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448902] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448927] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448939] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448953] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.448978] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.448984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.448990] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.448999] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449022] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449034] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449043] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449064] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449076] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449085] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449108] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449120] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449128] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449159] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449171] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449180] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449205] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449216] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449225] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449250] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449262] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449271] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449296] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449307] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449316] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449339] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449351] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449360] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449389] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449400] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449409] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449438] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.644 [2024-12-09 18:07:21.449443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:13.644 [2024-12-09 18:07:21.449450] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449458] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.644 [2024-12-09 18:07:21.449467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.644 [2024-12-09 18:07:21.449483] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449495] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449504] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449527] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449539] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449547] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449576] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449588] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449597] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449622] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449634] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449642] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449665] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449677] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449686] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449713] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449725] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449733] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449762] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449774] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449782] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449809] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449821] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449830] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449853] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449865] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449874] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449899] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449911] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449919] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449950] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.449956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.449963] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449971] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.449979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.449998] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.450004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.450010] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450020] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.450043] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.450049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.450055] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450064] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.450095] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.450100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.450106] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450115] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.450142] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.450147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.450154] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450162] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.645 [2024-12-09 18:07:21.450190] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.645 [2024-12-09 18:07:21.450195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:13.645 [2024-12-09 18:07:21.450202] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450210] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.645 [2024-12-09 18:07:21.450218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450235] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450247] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450256] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450283] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450295] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450305] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450326] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450338] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450347] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450370] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450382] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450391] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450421] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450433] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450442] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450469] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450481] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450489] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450512] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450524] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450533] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450556] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450569] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450578] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450605] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450617] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450625] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450652] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450664] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450673] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450704] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450715] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450724] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450755] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450767] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450775] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450802] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450814] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450822] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450855] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450868] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450877] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.646 [2024-12-09 18:07:21.450900] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.646 [2024-12-09 18:07:21.450906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:13.646 [2024-12-09 18:07:21.450912] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450921] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.646 [2024-12-09 18:07:21.450929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0
00:19:13.646 [2024-12-09 18:07:21.450944] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.646 [2024-12-09 18:07:21.454957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:19:13.646 [2024-12-09 18:07:21.454964] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00
00:19:13.646 [2024-12-09 18:07:21.454973] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00
00:19:13.646 [2024-12-09 18:07:21.454981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.646 [2024-12-09 18:07:21.455000] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.646 [2024-12-09 18:07:21.455006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0
00:19:13.646 [2024-12-09 18:07:21.455012] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00
00:19:13.646 [2024-12-09 18:07:21.455020] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:19:13.646 128
00:19:13.646 Transport Service Identifier: 4420
00:19:13.646 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:13.646 Transport Address: 192.168.100.8
00:19:13.646 Transport Specific Address Subtype - RDMA
00:19:13.646 RDMA QP Service Type: 1 (Reliable Connected)
00:19:13.646 RDMA Provider Type: 1 (No provider specified)
00:19:13.646 RDMA CM Service: 1 (RDMA_CM)
00:19:13.646 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:19:13.647 [2024-12-09 18:07:21.528642] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:19:13.647 [2024-12-09 18:07:21.528692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395134 ]
00:19:13.647 [2024-12-09 18:07:21.590147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:19:13.647 [2024-12-09 18:07:21.590217] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:19:13.647 [2024-12-09 18:07:21.590235] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:19:13.647 [2024-12-09 18:07:21.590241] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:19:13.647 [2024-12-09 18:07:21.590269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:19:13.647 [2024-12-09 18:07:21.600740] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
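Here the discovery controller has been shut down ("shutdown complete in 6 milliseconds") and the harness re-runs spdk_nvme_identify against the NVM subsystem itself (subnqn:nqn.2016-06.io.spdk:cnode1). The trace that follows walks SPDK's admin-queue bring-up state machine: FABRIC CONNECT, Property Gets for VS and CAP, a CC read ("check en"), "Setting CC.EN = 1", then polling until CSTS.RDY = 1 before IDENTIFY is issued. Over a fabric, every one of those register accesses travels as a Fabrics Property Get/Set command, which is what the FABRIC PROPERTY GET/SET lines are. A toy model of the enable handshake follows; an in-memory register file stands in for the property commands, and the offsets and bit positions are from the NVMe base spec:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Register offsets/bits per the NVMe base specification. */
#define REG_CC   0x14
#define REG_CSTS 0x1c
#define CC_EN    (1u << 0)
#define CSTS_RDY (1u << 0)

/* Toy register file; over NVMe-oF each read/write here would be a
 * Fabrics Property Get / Property Set on the admin queue. */
static uint32_t g_regs[0x40 / 4];

static uint32_t
rd32(uint32_t off)
{
	/* Model a controller that asserts RDY as soon as EN is set. */
	if (off == REG_CSTS && (g_regs[REG_CC / 4] & CC_EN)) {
		g_regs[REG_CSTS / 4] |= CSTS_RDY;
	}
	return g_regs[off / 4];
}

static void
wr32(uint32_t off, uint32_t v)
{
	g_regs[off / 4] = v;
}

int
main(void)
{
	/* "CC.EN = 0 && CSTS.RDY = 0": controller disabled, safe to enable. */
	wr32(REG_CC, rd32(REG_CC) | CC_EN);        /* "Setting CC.EN = 1" */
	while (!(rd32(REG_CSTS) & CSTS_RDY)) {     /* "wait for CSTS.RDY = 1" */
	}
	puts("CC.EN = 1 && CSTS.RDY = 1 - controller is ready");
	/* The earlier PROPERTY GET flood was the same pattern during shutdown,
	 * polling CSTS.SHST after writing CC.SHN. */
	return 0;
}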
00:19:13.647 [2024-12-09 18:07:21.610859] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:13.647 [2024-12-09 18:07:21.610870] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:13.647 [2024-12-09 18:07:21.610877] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610884] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610890] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610896] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610903] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610909] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610915] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610921] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610927] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610933] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610940] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610949] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610956] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610962] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610968] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610974] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610980] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610986] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610993] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.610999] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611005] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611011] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611017] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 
18:07:21.611023] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611030] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611036] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611046] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611053] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611059] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611065] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611071] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611077] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:13.647 [2024-12-09 18:07:21.611082] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:13.647 [2024-12-09 18:07:21.611087] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:13.647 [2024-12-09 18:07:21.611107] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.647 [2024-12-09 18:07:21.611119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x181d00 00:19:13.909 [2024-12-09 18:07:21.615952] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.909 [2024-12-09 18:07:21.615962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:13.909 [2024-12-09 18:07:21.615970] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.615977] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:13.909 [2024-12-09 18:07:21.615985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:13.909 [2024-12-09 18:07:21.615991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:13.909 [2024-12-09 18:07:21.616005] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.909 [2024-12-09 18:07:21.616034] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.909 [2024-12-09 18:07:21.616040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:13.909 [2024-12-09 18:07:21.616047] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:13.909 [2024-12-09 18:07:21.616053] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:13.909 [2024-12-09 18:07:21.616068] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.909 [2024-12-09 18:07:21.616093] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.909 [2024-12-09 18:07:21.616099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:13.909 [2024-12-09 18:07:21.616106] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:13.909 [2024-12-09 18:07:21.616112] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:13.909 [2024-12-09 18:07:21.616129] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.909 [2024-12-09 18:07:21.616153] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.909 [2024-12-09 18:07:21.616159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:13.909 [2024-12-09 18:07:21.616165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:13.909 [2024-12-09 18:07:21.616172] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616180] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.909 [2024-12-09 18:07:21.616206] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.909 [2024-12-09 18:07:21.616211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:13.909 [2024-12-09 18:07:21.616218] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:13.909 [2024-12-09 18:07:21.616223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:13.909 [2024-12-09 18:07:21.616230] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.909 [2024-12-09 18:07:21.616236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:13.909 [2024-12-09 18:07:21.616346] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:13.910 [2024-12-09 18:07:21.616352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:13.910 [2024-12-09 18:07:21.616361] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.910 [2024-12-09 18:07:21.616368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.910 [2024-12-09 18:07:21.616388] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.910 [2024-12-09 18:07:21.616394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:13.910 [2024-12-09 18:07:21.616400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:13.910 [2024-12-09 18:07:21.616406] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.910 [2024-12-09 18:07:21.616415] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.910 [2024-12-09 18:07:21.616423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.910 [2024-12-09 18:07:21.616439] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.910 [2024-12-09 18:07:21.616444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:13.910 [2024-12-09 18:07:21.616450] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:13.910 [2024-12-09 18:07:21.616458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:13.910 [2024-12-09 18:07:21.616464] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.910 [2024-12-09 18:07:21.616471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:13.910 [2024-12-09 18:07:21.616479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:13.910 [2024-12-09 18:07:21.616489] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00 00:19:13.910 [2024-12-09 18:07:21.616497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00 00:19:13.910 [2024-12-09 18:07:21.616548] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.910 [2024-12-09 18:07:21.616553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
00:19:13.910 [2024-12-09 18:07:21.616562] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:19:13.910 [2024-12-09 18:07:21.616568] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:19:13.910 [2024-12-09 18:07:21.616574] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:19:13.910 [2024-12-09 18:07:21.616579] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:19:13.910 [2024-12-09 18:07:21.616587] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:19:13.910 [2024-12-09 18:07:21.616593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616599] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616614] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.910 [2024-12-09 18:07:21.616646] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.910 [2024-12-09 18:07:21.616652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:19:13.910 [2024-12-09 18:07:21.616661] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:13.910 [2024-12-09 18:07:21.616676] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:13.910 [2024-12-09 18:07:21.616690] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:13.910 [2024-12-09 18:07:21.616703] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:13.910 [2024-12-09 18:07:21.616718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616724] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00
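The SET FEATURES ASYNC EVENT CONFIGURATION command and the four ASYNC EVENT REQUESTs (cid 1 through 4) above arm the controller's AER slots; they stay outstanding on the admin queue until the target completes one. A sketch of how an application would hook those events, assuming the public callback API (the helper name is hypothetical):

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback invoked when the target completes one of the armed
 * ASYNC EVENT REQUEST commands. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* cdw0 carries the async event type/info per the NVMe spec. */
		printf("AER fired: cdw0 0x%x\n", cpl->cdw0);
	}
}

void
arm_aer_handler(struct spdk_nvme_ctrlr *ctrlr) /* hypothetical helper */
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}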
00:19:13.910 [2024-12-09 18:07:21.616732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616739] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.910 [2024-12-09 18:07:21.616763] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.910 [2024-12-09 18:07:21.616769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:19:13.910 [2024-12-09 18:07:21.616777] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:19:13.910 [2024-12-09 18:07:21.616784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616790] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616811] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.910 [2024-12-09 18:07:21.616844] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.910 [2024-12-09 18:07:21.616850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:19:13.910 [2024-12-09 18:07:21.616902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616908] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.616924] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.616932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ca000 len:0x1000 key:0x181d00
00:19:13.910 [2024-12-09 18:07:21.616964] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.910 [2024-12-09 18:07:21.616970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:19:13.910 [2024-12-09 18:07:21.616982] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:19:13.910 [2024-12-09 18:07:21.616996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.617002] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.617011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.617020] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.617028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00
00:19:13.910 [2024-12-09 18:07:21.617058] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.910 [2024-12-09 18:07:21.617063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:19:13.910 [2024-12-09 18:07:21.617076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.617082] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.617090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.617098] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.617106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181d00
00:19:13.910 [2024-12-09 18:07:21.617128] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.910 [2024-12-09 18:07:21.617133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:19:13.910 [2024-12-09 18:07:21.617142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.617148] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00
00:19:13.910 [2024-12-09 18:07:21.617155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:19:13.910 [2024-12-09 18:07:21.617163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:19:13.911 [2024-12-09 18:07:21.617170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:19:13.911 [2024-12-09 18:07:21.617177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
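"Namespace 1 was added" and the nsid:1 IDENTIFY commands above are the active-namespace scan (Identify CNS 02h, then per-namespace Identify and ID-descriptor reads). A sketch of walking the resulting namespace list, assuming the standard accessor functions (the helper name is hypothetical):

#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical helper: iterate the active namespaces discovered by the
 * "identify active ns" / "identify ns" steps traced above. */
void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace %u: %ju LBAs of %u bytes\n", nsid,
		       (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}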
00:19:13.911 [2024-12-09 18:07:21.617183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:19:13.911 [2024-12-09 18:07:21.617189] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:19:13.911 [2024-12-09 18:07:21.617195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:19:13.911 [2024-12-09 18:07:21.617201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:19:13.911 [2024-12-09 18:07:21.617215] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.911 [2024-12-09 18:07:21.617231] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:19:13.911 [2024-12-09 18:07:21.617250] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617262] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617271] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.911 [2024-12-09 18:07:21.617287] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617299] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617305] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617317] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617326] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.911 [2024-12-09 18:07:21.617351] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617363] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617371] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.911 [2024-12-09 18:07:21.617399] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617411] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617425] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x2000 key:0x181d00
00:19:13.911 [2024-12-09 18:07:21.617441] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x200 key:0x181d00
00:19:13.911 [2024-12-09 18:07:21.617457] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x181d00
00:19:13.911 [2024-12-09 18:07:21.617476] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c5000 len:0x1000 key:0x181d00
00:19:13.911 [2024-12-09 18:07:21.617493] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617510] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617517] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617533] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00
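With the controller in the ready state, the GET FEATURES / GET LOG PAGE sweep above (arbitration, power management, temperature threshold, number of queues, and the error, SMART, and firmware-slot pages) is what an identify-style utility issues to build its report. A sketch of one such query, assuming the public admin-command API; the busy-wait poll and helper name are illustrative only:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

static volatile bool g_done;

static void
get_feature_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* For FID 0x04 the threshold lives in the low 16 bits of cdw0. */
		printf("Temperature threshold: %u\n", cpl->cdw0 & 0xFFFF);
	}
	g_done = true;
}

/* Hypothetical helper: issue GET FEATURES (Temperature Threshold, the
 * cdw10:00000004 command in the trace) and poll for its completion. */
int
read_temp_threshold(struct spdk_nvme_ctrlr *ctrlr)
{
	g_done = false;
	if (spdk_nvme_ctrlr_cmd_get_feature(ctrlr,
	    SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD, 0, NULL, 0,
	    get_feature_cb, NULL) != 0) {
		return -1;
	}
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}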
00:19:13.911 [2024-12-09 18:07:21.617540] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617552] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00
00:19:13.911 [2024-12-09 18:07:21.617558] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.911 [2024-12-09 18:07:21.617563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:19:13.911 [2024-12-09 18:07:21.617573] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00
00:19:13.911 =====================================================
00:19:13.911 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:19:13.911 =====================================================
00:19:13.911 Controller Capabilities/Features
00:19:13.911 ================================
00:19:13.911 Vendor ID: 8086
00:19:13.911 Subsystem Vendor ID: 8086
00:19:13.911 Serial Number: SPDK00000000000001
00:19:13.911 Model Number: SPDK bdev Controller
00:19:13.911 Firmware Version: 25.01
00:19:13.911 Recommended Arb Burst: 6
00:19:13.911 IEEE OUI Identifier: e4 d2 5c
00:19:13.911 Multi-path I/O
00:19:13.911 May have multiple subsystem ports: Yes
00:19:13.911 May have multiple controllers: Yes
00:19:13.911 Associated with SR-IOV VF: No
00:19:13.911 Max Data Transfer Size: 131072
00:19:13.911 Max Number of Namespaces: 32
00:19:13.911 Max Number of I/O Queues: 127
00:19:13.911 NVMe Specification Version (VS): 1.3
00:19:13.911 NVMe Specification Version (Identify): 1.3
00:19:13.911 Maximum Queue Entries: 128
00:19:13.911 Contiguous Queues Required: Yes
00:19:13.911 Arbitration Mechanisms Supported
00:19:13.911 Weighted Round Robin: Not Supported
00:19:13.911 Vendor Specific: Not Supported
00:19:13.911 Reset Timeout: 15000 ms
00:19:13.911 Doorbell Stride: 4 bytes
00:19:13.911 NVM Subsystem Reset: Not Supported
00:19:13.911 Command Sets Supported
00:19:13.911 NVM Command Set: Supported
00:19:13.911 Boot Partition: Not Supported
00:19:13.911 Memory Page Size Minimum: 4096 bytes
00:19:13.911 Memory Page Size Maximum: 4096 bytes
00:19:13.911 Persistent Memory Region: Not Supported
00:19:13.911 Optional Asynchronous Events Supported
00:19:13.911 Namespace Attribute Notices: Supported
00:19:13.911 Firmware Activation Notices: Not Supported
00:19:13.911 ANA Change Notices: Not Supported
00:19:13.911 PLE Aggregate Log Change Notices: Not Supported
00:19:13.911 LBA Status Info Alert Notices: Not Supported
00:19:13.911 EGE Aggregate Log Change Notices: Not Supported
00:19:13.911 Normal NVM Subsystem Shutdown event: Not Supported
00:19:13.911 Zone Descriptor Change Notices: Not Supported
00:19:13.911 Discovery Log Change Notices: Not Supported
00:19:13.911 Controller Attributes
00:19:13.911 128-bit Host Identifier: Supported
00:19:13.911 Non-Operational Permissive Mode: Not Supported
00:19:13.911 NVM Sets: Not Supported
00:19:13.911 Read Recovery Levels: Not Supported
00:19:13.911 Endurance Groups: Not Supported
00:19:13.911 Predictable Latency Mode: Not Supported
00:19:13.911 Traffic Based Keep ALive: Not Supported
00:19:13.911 Namespace Granularity: Not Supported
00:19:13.911 SQ Associations: Not Supported
00:19:13.911 UUID List: Not Supported
00:19:13.911 Multi-Domain Subsystem: Not Supported
00:19:13.911 Fixed Capacity Management: Not Supported
00:19:13.911 Variable Capacity Management: Not Supported
00:19:13.911 Delete Endurance Group: Not Supported
00:19:13.911 Delete NVM Set: Not Supported
00:19:13.911 Extended LBA Formats Supported: Not Supported
00:19:13.911 Flexible Data Placement Supported: Not Supported
00:19:13.911 
00:19:13.911 Controller Memory Buffer Support
00:19:13.911 ================================
00:19:13.911 Supported: No
00:19:13.911 
00:19:13.911 Persistent Memory Region Support
00:19:13.911 ================================
00:19:13.911 Supported: No
00:19:13.911 
00:19:13.911 Admin Command Set Attributes
00:19:13.911 ============================
00:19:13.911 Security Send/Receive: Not Supported
00:19:13.911 Format NVM: Not Supported
00:19:13.911 Firmware Activate/Download: Not Supported
00:19:13.911 Namespace Management: Not Supported
00:19:13.911 Device Self-Test: Not Supported
00:19:13.911 Directives: Not Supported
00:19:13.912 NVMe-MI: Not Supported
00:19:13.912 Virtualization Management: Not Supported
00:19:13.912 Doorbell Buffer Config: Not Supported
00:19:13.912 Get LBA Status Capability: Not Supported
00:19:13.912 Command & Feature Lockdown Capability: Not Supported
00:19:13.912 Abort Command Limit: 4
00:19:13.912 Async Event Request Limit: 4
00:19:13.912 Number of Firmware Slots: N/A
00:19:13.912 Firmware Slot 1 Read-Only: N/A
00:19:13.912 Firmware Activation Without Reset: N/A
00:19:13.912 Multiple Update Detection Support: N/A
00:19:13.912 Firmware Update Granularity: No Information Provided
00:19:13.912 Per-Namespace SMART Log: No
00:19:13.912 Asymmetric Namespace Access Log Page: Not Supported
00:19:13.912 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:19:13.912 Command Effects Log Page: Supported
00:19:13.912 Get Log Page Extended Data: Supported
00:19:13.912 Telemetry Log Pages: Not Supported
00:19:13.912 Persistent Event Log Pages: Not Supported
00:19:13.912 Supported Log Pages Log Page: May Support
00:19:13.912 Commands Supported & Effects Log Page: Not Supported
00:19:13.912 Feature Identifiers & Effects Log Page:May Support
00:19:13.912 NVMe-MI Commands & Effects Log Page: May Support
00:19:13.912 Data Area 4 for Telemetry Log: Not Supported
00:19:13.912 Error Log Page Entries Supported: 128
00:19:13.912 Keep Alive: Supported
00:19:13.912 Keep Alive Granularity: 10000 ms
00:19:13.912 
00:19:13.912 NVM Command Set Attributes
00:19:13.912 ==========================
00:19:13.912 Submission Queue Entry Size
00:19:13.912 Max: 64
00:19:13.912 Min: 64
00:19:13.912 Completion Queue Entry Size
00:19:13.912 Max: 16
00:19:13.912 Min: 16
00:19:13.912 Number of Namespaces: 32
00:19:13.912 Compare Command: Supported
00:19:13.912 Write Uncorrectable Command: Not Supported
00:19:13.912 Dataset Management Command: Supported
00:19:13.912 Write Zeroes Command: Supported
00:19:13.912 Set Features Save Field: Not Supported
00:19:13.912 Reservations: Supported
00:19:13.912 Timestamp: Not Supported
00:19:13.912 Copy: Supported
00:19:13.912 Volatile Write Cache: Present
00:19:13.912 Atomic Write Unit (Normal): 1
00:19:13.912 Atomic Write Unit (PFail): 1
00:19:13.912 Atomic Compare & Write Unit: 1
00:19:13.912 Fused Compare & Write: Supported
00:19:13.912 Scatter-Gather List
00:19:13.912 SGL Command Set: Supported
00:19:13.912 SGL Keyed: Supported
00:19:13.912 SGL Bit Bucket Descriptor: Not Supported
00:19:13.912 SGL Metadata Pointer: Not Supported
00:19:13.912 Oversized SGL: Not Supported
00:19:13.912 SGL Metadata Address: Not Supported
00:19:13.912 SGL Offset: Supported
00:19:13.912 Transport SGL Data Block: Not Supported
00:19:13.912 Replay Protected Memory Block: Not Supported
00:19:13.912 
00:19:13.912 Firmware Slot Information
00:19:13.912 =========================
00:19:13.912 Active slot: 1
00:19:13.912 Slot 1 Firmware Revision: 25.01
00:19:13.912 
00:19:13.912 
00:19:13.912 Commands Supported and Effects
00:19:13.912 ==============================
00:19:13.912 Admin Commands
00:19:13.912 --------------
00:19:13.912 Get Log Page (02h): Supported
00:19:13.912 Identify (06h): Supported
00:19:13.912 Abort (08h): Supported
00:19:13.912 Set Features (09h): Supported
00:19:13.912 Get Features (0Ah): Supported
00:19:13.912 Asynchronous Event Request (0Ch): Supported
00:19:13.912 Keep Alive (18h): Supported
00:19:13.912 I/O Commands
00:19:13.912 ------------
00:19:13.912 Flush (00h): Supported LBA-Change
00:19:13.912 Write (01h): Supported LBA-Change
00:19:13.912 Read (02h): Supported
00:19:13.912 Compare (05h): Supported
00:19:13.912 Write Zeroes (08h): Supported LBA-Change
00:19:13.912 Dataset Management (09h): Supported LBA-Change
00:19:13.912 Copy (19h): Supported LBA-Change
00:19:13.912 
00:19:13.912 Error Log
00:19:13.912 =========
00:19:13.912 
00:19:13.912 Arbitration
00:19:13.912 ===========
00:19:13.912 Arbitration Burst: 1
00:19:13.912 
00:19:13.912 Power Management
00:19:13.912 ================
00:19:13.912 Number of Power States: 1
00:19:13.912 Current Power State: Power State #0
00:19:13.912 Power State #0:
00:19:13.912 Max Power: 0.00 W
00:19:13.912 Non-Operational State: Operational
00:19:13.912 Entry Latency: Not Reported
00:19:13.912 Exit Latency: Not Reported
00:19:13.912 Relative Read Throughput: 0
00:19:13.912 Relative Read Latency: 0
00:19:13.912 Relative Write Throughput: 0
00:19:13.912 Relative Write Latency: 0
00:19:13.912 Idle Power: Not Reported
00:19:13.912 Active Power: Not Reported
00:19:13.912 Non-Operational Permissive Mode: Not Supported
00:19:13.912 
00:19:13.912 Health Information
00:19:13.912 ==================
00:19:13.912 Critical Warnings:
00:19:13.912 Available Spare Space: OK
00:19:13.912 Temperature: OK
00:19:13.912 Device Reliability: OK
00:19:13.912 Read Only: No
00:19:13.912 Volatile Memory Backup: OK
00:19:13.912 Current Temperature: 0 Kelvin (-273 Celsius)
00:19:13.912 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:19:13.912 Available Spare: 0%
00:19:13.912 Available Spare Threshold: 0%
00:19:13.912 Life Percentage [2024-12-09 18:07:21.617653] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x181d00
00:19:13.912 [2024-12-09 18:07:21.617661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:19:13.912 [2024-12-09 18:07:21.617679] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:19:13.912 [2024-12-09 18:07:21.617685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:19:13.912 [2024-12-09 18:07:21.617691] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00
00:19:13.912 [2024-12-09 18:07:21.617718] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:19:13.912 [2024-12-09 18:07:21.617728] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4492 doesn't match qid
00:19:13.912 [2024-12-09 18:07:21.617742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32708 cdw0:b2208730 sqhd:e880 p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617749] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4492 doesn't match qid 00:19:13.912 [2024-12-09 18:07:21.617756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32708 cdw0:b2208730 sqhd:e880 p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617763] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4492 doesn't match qid 00:19:13.912 [2024-12-09 18:07:21.617770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32708 cdw0:b2208730 sqhd:e880 p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617777] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 4492 doesn't match qid 00:19:13.912 [2024-12-09 18:07:21.617784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32708 cdw0:b2208730 sqhd:e880 p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617793] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.912 [2024-12-09 18:07:21.617815] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.912 [2024-12-09 18:07:21.617821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617829] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.912 [2024-12-09 18:07:21.617843] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617859] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.912 [2024-12-09 18:07:21.617865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617871] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:13.912 [2024-12-09 18:07:21.617877] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:13.912 [2024-12-09 18:07:21.617883] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617892] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.912 [2024-12-09 18:07:21.617923] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.912 [2024-12-09 18:07:21.617930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617936] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617945] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.912 [2024-12-09 18:07:21.617975] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.912 [2024-12-09 18:07:21.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:13.912 [2024-12-09 18:07:21.617987] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x181d00 00:19:13.912 [2024-12-09 18:07:21.617996] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618019] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618031] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618040] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618068] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618081] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618090] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618114] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618127] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618136] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618167] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618179] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618188] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618213] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618225] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618234] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618257] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618270] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618278] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618304] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618316] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618325] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618348] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618361] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618370] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618397] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618409] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618418] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618447] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618459] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618467] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618491] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618502] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618511] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618538] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618550] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618559] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618582] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 
18:07:21.618594] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618603] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.913 [2024-12-09 18:07:21.618626] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.913 [2024-12-09 18:07:21.618633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:13.913 [2024-12-09 18:07:21.618639] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618648] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.913 [2024-12-09 18:07:21.618656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618673] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618685] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618694] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618717] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618729] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618738] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618767] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618779] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618788] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618816] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618828] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618837] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618860] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618872] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618881] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618904] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618916] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618924] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618950] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.618956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.618962] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618971] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.618978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.618996] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.619002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.619008] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619017] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.619046] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.619051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.619057] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619066] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.619091] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.619097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.619103] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619112] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.619143] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.619149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.619155] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619164] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.619191] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.619196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 18:07:21.619203] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619211] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.914 [2024-12-09 18:07:21.619219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.914 [2024-12-09 18:07:21.619236] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.914 [2024-12-09 18:07:21.619242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:13.914 [2024-12-09 
18:07:21.619248] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x181d00 00:19:13.914 [... repeated nvme_rdma submit/recv-completion DEBUG cycles for FABRIC PROPERTY GET (qid:0 cid:3, all SUCCESS, sqhd 0018 through 0005) elided ...] 00:19:13.915 [2024-12-09
18:07:21.619922] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x181d00 00:19:13.915 [2024-12-09 18:07:21.619931] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.915 [2024-12-09 18:07:21.619938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.915 [2024-12-09 18:07:21.623951] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.915 [2024-12-09 18:07:21.623959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:13.915 [2024-12-09 18:07:21.623965] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x181d00 00:19:13.915 [2024-12-09 18:07:21.623974] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x181d00 00:19:13.915 [2024-12-09 18:07:21.623982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:13.915 [2024-12-09 18:07:21.623998] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:13.915 [2024-12-09 18:07:21.624004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0 00:19:13.915 [2024-12-09 18:07:21.624010] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x181d00 00:19:13.915 [2024-12-09 18:07:21.624017] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:19:13.915 Used: 0% 00:19:13.915 Data Units Read: 0 00:19:13.915 Data Units Written: 0 00:19:13.915 Host Read Commands: 0 00:19:13.915 Host Write Commands: 0 00:19:13.915 Controller Busy Time: 0 minutes 00:19:13.915 Power Cycles: 0 00:19:13.915 Power On Hours: 0 hours 00:19:13.915 Unsafe Shutdowns: 0 00:19:13.915 Unrecoverable Media Errors: 0 00:19:13.915 Lifetime Error Log Entries: 0 00:19:13.915 Warning Temperature Time: 0 minutes 00:19:13.915 Critical Temperature Time: 0 minutes 00:19:13.915 00:19:13.915 Number of Queues 00:19:13.915 ================ 00:19:13.915 Number of I/O Submission Queues: 127 00:19:13.915 Number of I/O Completion Queues: 127 00:19:13.915 00:19:13.915 Active Namespaces 00:19:13.915 ================= 00:19:13.915 Namespace ID:1 00:19:13.915 Error Recovery Timeout: Unlimited 00:19:13.915 Command Set Identifier: NVM (00h) 00:19:13.915 Deallocate: Supported 00:19:13.915 Deallocated/Unwritten Error: Not Supported 00:19:13.915 Deallocated Read Value: Unknown 00:19:13.915 Deallocate in Write Zeroes: Not Supported 00:19:13.915 Deallocated Guard Field: 0xFFFF 00:19:13.915 Flush: Supported 00:19:13.915 Reservation: Supported 00:19:13.915 Namespace Sharing Capabilities: Multiple Controllers 00:19:13.915 Size (in LBAs): 131072 (0GiB) 00:19:13.915 Capacity (in LBAs): 131072 (0GiB) 00:19:13.915 Utilization (in LBAs): 131072 (0GiB) 00:19:13.915 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:13.915 EUI64: ABCDEF0123456789 00:19:13.915 UUID: 34f57599-16d6-4213-bc78-0542fa75ca1f 00:19:13.915 Thin Provisioning: Not Supported 00:19:13.915 Per-NS Atomic Units: Yes 00:19:13.915 Atomic Boundary Size (Normal): 0 00:19:13.915 Atomic Boundary Size (PFail): 0 00:19:13.915 Atomic Boundary Offset: 0 00:19:13.915 
Maximum Single Source Range Length: 65535 00:19:13.915 Maximum Copy Length: 65535 00:19:13.915 Maximum Source Range Count: 1 00:19:13.915 NGUID/EUI64 Never Reused: No 00:19:13.915 Namespace Write Protected: No 00:19:13.915 Number of LBA Formats: 1 00:19:13.915 Current LBA Format: LBA Format #00 00:19:13.915 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:13.916 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:13.916 rmmod nvme_rdma 00:19:13.916 rmmod nvme_fabrics 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2394894 ']' 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2394894 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2394894 ']' 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2394894 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2394894 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2394894' 00:19:13.916 killing process with pid 2394894 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2394894 00:19:13.916 18:07:21 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # 
wait 2394894 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:14.175 00:19:14.175 real 0m9.450s 00:19:14.175 user 0m9.046s 00:19:14.175 sys 0m6.126s 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 ************************************ 00:19:14.175 END TEST nvmf_identify 00:19:14.175 ************************************ 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 ************************************ 00:19:14.175 START TEST nvmf_perf 00:19:14.175 ************************************ 00:19:14.175 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:14.436 * Looking for test storage... 00:19:14.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.436 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.437 --rc genhtml_branch_coverage=1 00:19:14.437 --rc genhtml_function_coverage=1 00:19:14.437 --rc genhtml_legend=1 00:19:14.437 --rc geninfo_all_blocks=1 00:19:14.437 --rc geninfo_unexecuted_blocks=1 00:19:14.437 00:19:14.437 ' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.437 --rc genhtml_branch_coverage=1 00:19:14.437 --rc genhtml_function_coverage=1 00:19:14.437 --rc genhtml_legend=1 00:19:14.437 --rc geninfo_all_blocks=1 00:19:14.437 --rc geninfo_unexecuted_blocks=1 00:19:14.437 00:19:14.437 ' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.437 --rc genhtml_branch_coverage=1 00:19:14.437 --rc genhtml_function_coverage=1 00:19:14.437 --rc genhtml_legend=1 00:19:14.437 --rc geninfo_all_blocks=1 00:19:14.437 --rc geninfo_unexecuted_blocks=1 00:19:14.437 00:19:14.437 ' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.437 --rc genhtml_branch_coverage=1 00:19:14.437 --rc genhtml_function_coverage=1 00:19:14.437 --rc genhtml_legend=1 00:19:14.437 --rc geninfo_all_blocks=1 00:19:14.437 --rc geninfo_unexecuted_blocks=1 00:19:14.437 00:19:14.437 ' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.437 18:07:22 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.437 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.438 18:07:22 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:14.438 18:07:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.563 18:07:29 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:22.563 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:22.563 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:22.563 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
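A minimal standalone sketch of the PCI-to-netdev mapping traced above, assuming the Linux sysfs layout the harness relies on; the PCI address and interface name are the ones reported in this log:

    #!/usr/bin/env bash
    # Resolve the kernel network interfaces bound to a PCI function, the same
    # way nvmf/common.sh builds pci_net_devs from the sysfs "net" directory.
    pci=0000:d9:00.0                                  # mlx5 port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"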
00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:22.563 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:22.563 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:22.564 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:22.564 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:22.564 altname enp217s0f0np0 00:19:22.564 altname ens818f0np0 00:19:22.564 inet 192.168.100.8/24 scope global mlx_0_0 00:19:22.564 valid_lft forever preferred_lft forever 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:22.564 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:22.564 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:22.564 altname enp217s0f1np1 00:19:22.564 altname ens818f1np1 00:19:22.564 inet 192.168.100.9/24 scope global mlx_0_1 00:19:22.564 valid_lft forever preferred_lft forever 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:19:22.564 192.168.100.9' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:22.564 192.168.100.9' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:22.564 192.168.100.9' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2398607 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2398607 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2398607 ']' 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.564 18:07:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:22.564 [2024-12-09 18:07:29.683113] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
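A minimal sketch of the target-IP selection traced above, assuming the two-address list gathered from the mlx_0_0/mlx_0_1 interfaces; the addresses are the ones shown in this log:

    #!/usr/bin/env bash
    # Split the newline-separated RDMA IP list into first and second target
    # IPs, mirroring the head/tail pipeline used by nvmf/common.sh above.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"  # 192.168.100.8 192.168.100.9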
00:19:22.564 [2024-12-09 18:07:29.683163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.564 [2024-12-09 18:07:29.773404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.564 [2024-12-09 18:07:29.813888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.564 [2024-12-09 18:07:29.813928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.564 [2024-12-09 18:07:29.813938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.564 [2024-12-09 18:07:29.813950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.564 [2024-12-09 18:07:29.813957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.564 [2024-12-09 18:07:29.815708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.564 [2024-12-09 18:07:29.815818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.564 [2024-12-09 18:07:29.815930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.564 [2024-12-09 18:07:29.815930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.564 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.565 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:19:22.565 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.565 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.565 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:22.823 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.823 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:22.823 18:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:26.106 18:07:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:26.106 18:07:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:26.106 18:07:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:19:26.106 18:07:33 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:26.106 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:26.106 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:19:26.106 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:26.106 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:19:26.106 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:19:26.365 [2024-12-09 18:07:34.238812] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:19:26.365 [2024-12-09 18:07:34.260200] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20a23a0/0x1f78040) succeed. 00:19:26.365 [2024-12-09 18:07:34.269609] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20a38a0/0x1ff7bc0) succeed. 00:19:26.623 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:26.623 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:26.623 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.881 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:26.881 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:27.139 18:07:34 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:27.397 [2024-12-09 18:07:35.145934] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:27.397 18:07:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:27.654 18:07:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:19:27.654 18:07:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:19:27.654 18:07:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:27.654 18:07:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:19:29.027 Initializing NVMe Controllers 00:19:29.027 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:19:29.027 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:19:29.027 Initialization complete. Launching workers. 
00:19:29.027 ======================================================== 00:19:29.027 Latency(us) 00:19:29.027 Device Information : IOPS MiB/s Average min max 00:19:29.027 PCIE (0000:d8:00.0) NSID 1 from core 0: 101411.09 396.14 315.24 24.68 6203.03 00:19:29.027 ======================================================== 00:19:29.027 Total : 101411.09 396.14 315.24 24.68 6203.03 00:19:29.027 00:19:29.028 18:07:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:32.310 Initializing NVMe Controllers 00:19:32.310 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:32.310 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:32.310 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:32.310 Initialization complete. Launching workers. 00:19:32.310 ======================================================== 00:19:32.310 Latency(us) 00:19:32.310 Device Information : IOPS MiB/s Average min max 00:19:32.310 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6606.84 25.81 151.01 46.94 4088.24 00:19:32.310 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5127.11 20.03 194.65 66.70 4100.68 00:19:32.310 ======================================================== 00:19:32.310 Total : 11733.95 45.84 170.08 46.94 4100.68 00:19:32.310 00:19:32.310 18:07:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:35.592 Initializing NVMe Controllers 00:19:35.592 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:35.592 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:35.592 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:35.592 Initialization complete. Launching workers. 00:19:35.592 ======================================================== 00:19:35.592 Latency(us) 00:19:35.592 Device Information : IOPS MiB/s Average min max 00:19:35.592 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18301.98 71.49 1741.22 468.73 6274.96 00:19:35.592 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7955.24 5829.94 9152.83 00:19:35.592 ======================================================== 00:19:35.592 Total : 22333.98 87.24 2863.05 468.73 9152.83 00:19:35.592 00:19:35.592 18:07:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:19:35.592 18:07:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:40.855 Initializing NVMe Controllers 00:19:40.855 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.855 Controller IO queue size 128, less than required. 00:19:40.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:19:40.855 Controller IO queue size 128, less than required. 00:19:40.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:40.855 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.855 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:40.855 Initialization complete. Launching workers. 00:19:40.855 ======================================================== 00:19:40.855 Latency(us) 00:19:40.855 Device Information : IOPS MiB/s Average min max 00:19:40.855 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3983.14 995.79 32345.96 14372.54 85680.49 00:19:40.855 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4007.10 1001.78 31481.24 13796.50 56833.74 00:19:40.855 ======================================================== 00:19:40.855 Total : 7990.24 1997.56 31912.30 13796.50 85680.49 00:19:40.855 00:19:40.855 18:07:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:19:40.855 No valid NVMe controllers or AIO or URING devices found 00:19:40.855 Initializing NVMe Controllers 00:19:40.855 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.855 Controller IO queue size 128, less than required. 00:19:40.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:40.855 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:40.855 Controller IO queue size 128, less than required. 00:19:40.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:40.855 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:19:40.855 WARNING: Some requested NVMe devices were skipped 00:19:40.855 18:07:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:19:45.040 Initializing NVMe Controllers 00:19:45.040 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.040 Controller IO queue size 128, less than required. 00:19:45.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.040 Controller IO queue size 128, less than required. 00:19:45.040 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:45.040 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:45.040 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:45.040 Initialization complete. Launching workers. 
00:19:45.040 00:19:45.040 ==================== 00:19:45.040 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:45.040 RDMA transport: 00:19:45.040 dev name: mlx5_0 00:19:45.040 polls: 408454 00:19:45.040 idle_polls: 404823 00:19:45.040 completions: 45254 00:19:45.040 queued_requests: 1 00:19:45.040 total_send_wrs: 22627 00:19:45.040 send_doorbell_updates: 3381 00:19:45.040 total_recv_wrs: 22754 00:19:45.040 recv_doorbell_updates: 3384 00:19:45.040 --------------------------------- 00:19:45.040 00:19:45.040 ==================== 00:19:45.040 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:45.040 RDMA transport: 00:19:45.040 dev name: mlx5_0 00:19:45.040 polls: 412404 00:19:45.040 idle_polls: 412116 00:19:45.040 completions: 20102 00:19:45.040 queued_requests: 1 00:19:45.040 total_send_wrs: 10051 00:19:45.040 send_doorbell_updates: 258 00:19:45.040 total_recv_wrs: 10178 00:19:45.040 recv_doorbell_updates: 259 00:19:45.040 --------------------------------- 00:19:45.040 ======================================================== 00:19:45.040 Latency(us) 00:19:45.040 Device Information : IOPS MiB/s Average min max 00:19:45.040 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5646.45 1411.61 22766.00 11269.29 70108.63 00:19:45.040 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2508.04 627.01 50946.94 27971.52 80215.10 00:19:45.040 ======================================================== 00:19:45.040 Total : 8154.49 2038.62 31433.47 11269.29 80215.10 00:19:45.040 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:45.040 rmmod nvme_rdma 00:19:45.040 rmmod nvme_fabrics 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2398607 ']' 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2398607 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2398607 ']' 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 2398607 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.040 18:07:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398607 00:19:45.298 18:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.298 18:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.298 18:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398607' 00:19:45.298 killing process with pid 2398607 00:19:45.298 18:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2398607 00:19:45.298 18:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2398607 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:47.828 00:19:47.828 real 0m33.281s 00:19:47.828 user 1m44.865s 00:19:47.828 sys 0m7.071s 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:47.828 ************************************ 00:19:47.828 END TEST nvmf_perf 00:19:47.828 ************************************ 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.828 ************************************ 00:19:47.828 START TEST nvmf_fio_host 00:19:47.828 ************************************ 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:19:47.828 * Looking for test storage... 
00:19:47.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.828 --rc genhtml_branch_coverage=1 00:19:47.828 --rc genhtml_function_coverage=1 00:19:47.828 --rc genhtml_legend=1 00:19:47.828 --rc geninfo_all_blocks=1 00:19:47.828 --rc geninfo_unexecuted_blocks=1 00:19:47.828 00:19:47.828 ' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.828 --rc genhtml_branch_coverage=1 00:19:47.828 --rc genhtml_function_coverage=1 00:19:47.828 --rc genhtml_legend=1 00:19:47.828 --rc geninfo_all_blocks=1 00:19:47.828 --rc geninfo_unexecuted_blocks=1 00:19:47.828 00:19:47.828 ' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.828 --rc genhtml_branch_coverage=1 00:19:47.828 --rc genhtml_function_coverage=1 00:19:47.828 --rc genhtml_legend=1 00:19:47.828 --rc geninfo_all_blocks=1 00:19:47.828 --rc geninfo_unexecuted_blocks=1 00:19:47.828 00:19:47.828 ' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:47.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.828 --rc genhtml_branch_coverage=1 00:19:47.828 --rc genhtml_function_coverage=1 00:19:47.828 --rc genhtml_legend=1 00:19:47.828 --rc geninfo_all_blocks=1 00:19:47.828 --rc geninfo_unexecuted_blocks=1 00:19:47.828 00:19:47.828 ' 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.828 18:07:55 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.828 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.829 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:47.829 
18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.829 18:07:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:56.018 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:56.018 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:56.018 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:56.018 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:56.018 
18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:56.018 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:56.018 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:56.018 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:56.018 altname enp217s0f0np0 00:19:56.018 altname ens818f0np0 00:19:56.018 inet 192.168.100.8/24 scope global mlx_0_0 00:19:56.019 valid_lft forever preferred_lft forever 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:56.019 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:56.019 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:56.019 altname enp217s0f1np1 00:19:56.019 altname ens818f1np1 00:19:56.019 inet 192.168.100.9/24 scope global mlx_0_1 00:19:56.019 valid_lft forever preferred_lft forever 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:56.019 18:08:02 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:56.019 192.168.100.9' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:56.019 192.168.100.9' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:56.019 192.168.100.9' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:56.019 18:08:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2406363 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 2406363 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2406363 ']' 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.019 [2024-12-09 18:08:03.076188] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:19:56.019 [2024-12-09 18:08:03.076243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.019 [2024-12-09 18:08:03.167714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.019 [2024-12-09 18:08:03.207740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.019 [2024-12-09 18:08:03.207780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.019 [2024-12-09 18:08:03.207789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.019 [2024-12-09 18:08:03.207797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.019 [2024-12-09 18:08:03.207820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.019 [2024-12-09 18:08:03.209376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.019 [2024-12-09 18:08:03.209478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.019 [2024-12-09 18:08:03.209592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.019 [2024-12-09 18:08:03.209593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:19:56.019 18:08:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:56.277 [2024-12-09 18:08:04.098964] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1664980/0x1668e70) succeed. 00:19:56.278 [2024-12-09 18:08:04.108216] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1666010/0x16aa510) succeed. 
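With nvmf_tgt listening on /var/tmp/spdk.sock, fio.sh provisions the target over JSON-RPC: the RDMA transport was created in the record above, and the records that follow add the backing bdev, the subsystem, its namespace, and the listeners. Condensed into one place for reference; every command and argument appears verbatim in the surrounding log, and only the $rpc shorthand and grouping are editorial.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport, 8 KiB I/O unit
$rpc bdev_malloc_create 64 512 -b Malloc1                               # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420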
00:19:56.536 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:56.536 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.536 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.536 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:56.536 Malloc1 00:19:56.794 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:56.794 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.052 18:08:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:57.310 [2024-12-09 18:08:05.099132] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:57.310 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:57.568 18:08:05 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:57.568 18:08:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:57.826 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:57.826 fio-3.35 00:19:57.826 Starting 1 thread 00:20:00.382 00:20:00.382 test: (groupid=0, jobs=1): err= 0: pid=2406925: Mon Dec 9 18:08:07 2024 00:20:00.382 read: IOPS=17.6k, BW=68.6MiB/s (71.9MB/s)(137MiB/2003msec) 00:20:00.382 slat (nsec): min=1345, max=37288, avg=1472.22, stdev=414.78 00:20:00.382 clat (usec): min=2250, max=6567, avg=3622.00, stdev=89.24 00:20:00.382 lat (usec): min=2264, max=6569, avg=3623.47, stdev=89.13 00:20:00.382 clat percentiles (usec): 00:20:00.382 | 1.00th=[ 3589], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:20:00.382 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3621], 00:20:00.382 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3654], 95.00th=[ 3654], 00:20:00.382 | 99.00th=[ 3687], 99.50th=[ 3785], 99.90th=[ 4752], 99.95th=[ 6063], 00:20:00.382 | 99.99th=[ 6521] 00:20:00.382 bw ( KiB/s): min=68936, max=70768, per=99.97%, avg=70202.00, stdev=860.10, samples=4 00:20:00.382 iops : min=17234, max=17692, avg=17550.50, stdev=215.02, samples=4 00:20:00.382 write: IOPS=17.6k, BW=68.6MiB/s (72.0MB/s)(137MiB/2003msec); 0 zone resets 00:20:00.382 slat (nsec): min=1367, max=19002, avg=1553.89, stdev=442.66 00:20:00.382 clat (usec): min=2256, max=6561, avg=3619.82, stdev=80.96 00:20:00.382 lat (usec): min=2267, max=6563, avg=3621.37, stdev=80.86 00:20:00.382 clat percentiles (usec): 00:20:00.382 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:20:00.382 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3621], 00:20:00.382 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3654], 95.00th=[ 3654], 00:20:00.382 | 99.00th=[ 3687], 99.50th=[ 3785], 99.90th=[ 4359], 99.95th=[ 5604], 00:20:00.382 | 99.99th=[ 6521] 00:20:00.382 bw ( KiB/s): min=68944, max=70808, per=99.97%, avg=70254.00, stdev=877.48, samples=4 00:20:00.382 iops : min=17236, max=17702, avg=17563.50, stdev=219.37, samples=4 00:20:00.382 lat (msec) : 4=99.84%, 10=0.16% 00:20:00.382 cpu : usr=99.45%, sys=0.10%, ctx=15, majf=0, minf=3 00:20:00.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:00.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.382 issued rwts: total=35164,35191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.382 00:20:00.382 Run status group 0 (all jobs): 00:20:00.382 READ: bw=68.6MiB/s (71.9MB/s), 68.6MiB/s-68.6MiB/s (71.9MB/s-71.9MB/s), io=137MiB (144MB), run=2003-2003msec 00:20:00.382 WRITE: bw=68.6MiB/s (72.0MB/s), 68.6MiB/s-68.6MiB/s (72.0MB/s-72.0MB/s), io=137MiB (144MB), run=2003-2003msec 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:00.382 18:08:08 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:00.642 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:00.642 fio-3.35 00:20:00.642 Starting 1 thread 00:20:03.173 00:20:03.173 test: (groupid=0, jobs=1): err= 0: pid=2407461: Mon Dec 9 18:08:10 2024 00:20:03.173 read: IOPS=14.2k, BW=222MiB/s (233MB/s)(439MiB/1979msec) 00:20:03.173 slat (nsec): min=2220, max=51201, avg=2553.43, stdev=961.44 00:20:03.173 clat (usec): min=528, max=8529, avg=1553.59, stdev=1211.77 00:20:03.173 lat (usec): min=530, max=8549, avg=1556.14, stdev=1212.11 00:20:03.173 clat percentiles (usec): 00:20:03.173 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 930], 00:20:03.173 | 30.00th=[ 996], 40.00th=[ 1074], 50.00th=[ 1172], 60.00th=[ 1287], 00:20:03.173 | 70.00th=[ 1401], 80.00th=[ 1565], 90.00th=[ 2769], 95.00th=[ 4948], 00:20:03.173 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 7701], 00:20:03.173 | 99.99th=[ 8455] 00:20:03.173 bw ( KiB/s): min=111712, max=115328, per=49.87%, avg=113328.00, stdev=1569.09, samples=4 00:20:03.173 iops : min= 6982, max= 7208, avg=7083.00, stdev=98.07, samples=4 00:20:03.173 write: IOPS=8050, BW=126MiB/s (132MB/s)(230MiB/1829msec); 0 zone resets 00:20:03.173 slat (usec): min=26, max=143, avg=28.72, stdev= 5.51 00:20:03.173 clat (usec): min=4572, max=20039, avg=12929.43, stdev=1940.93 00:20:03.173 lat (usec): min=4601, max=20066, avg=12958.15, stdev=1940.60 00:20:03.173 clat percentiles (usec): 00:20:03.173 | 1.00th=[ 7177], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:20:03.173 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:20:03.173 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15270], 95.00th=[15926], 00:20:03.173 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:20:03.173 | 99.99th=[19792] 00:20:03.173 bw ( KiB/s): min=113024, max=118944, per=90.74%, avg=116872.00, stdev=2631.71, samples=4 00:20:03.173 iops : min= 7064, max= 7434, avg=7304.50, stdev=164.48, samples=4 00:20:03.173 lat (usec) : 750=1.76%, 1000=18.53% 00:20:03.173 lat (msec) : 2=37.85%, 4=2.28%, 10=6.77%, 20=32.81%, 50=0.01% 00:20:03.173 cpu : usr=96.51%, sys=1.85%, ctx=186, majf=0, minf=3 00:20:03.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:03.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:03.173 issued rwts: total=28106,14724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:03.173 00:20:03.173 Run status group 0 (all jobs): 00:20:03.173 READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=439MiB (460MB), run=1979-1979msec 00:20:03.173 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=230MiB (241MB), run=1829-1829msec 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:03.173 
18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.173 18:08:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:03.173 rmmod nvme_rdma 00:20:03.173 rmmod nvme_fabrics 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2406363 ']' 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2406363 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2406363 ']' 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2406363 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406363 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406363' 00:20:03.173 killing process with pid 2406363 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2406363 00:20:03.173 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2406363 00:20:03.433 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:03.433 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:03.433 00:20:03.433 real 0m15.859s 00:20:03.433 user 0m57.163s 00:20:03.433 sys 0m6.709s 00:20:03.433 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.433 18:08:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.433 ************************************ 00:20:03.433 END TEST nvmf_fio_host 00:20:03.433 ************************************ 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=rdma 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.692 ************************************ 00:20:03.692 START TEST nvmf_failover 00:20:03.692 ************************************ 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:03.692 * Looking for test storage... 00:20:03.692 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:03.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.692 --rc genhtml_branch_coverage=1 00:20:03.692 --rc genhtml_function_coverage=1 00:20:03.692 --rc genhtml_legend=1 00:20:03.692 --rc geninfo_all_blocks=1 00:20:03.692 --rc geninfo_unexecuted_blocks=1 00:20:03.692 00:20:03.692 ' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:03.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.692 --rc genhtml_branch_coverage=1 00:20:03.692 --rc genhtml_function_coverage=1 00:20:03.692 --rc genhtml_legend=1 00:20:03.692 --rc geninfo_all_blocks=1 00:20:03.692 --rc geninfo_unexecuted_blocks=1 00:20:03.692 00:20:03.692 ' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:03.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.692 --rc genhtml_branch_coverage=1 00:20:03.692 --rc genhtml_function_coverage=1 00:20:03.692 --rc genhtml_legend=1 00:20:03.692 --rc geninfo_all_blocks=1 00:20:03.692 --rc geninfo_unexecuted_blocks=1 00:20:03.692 00:20:03.692 ' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:03.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.692 --rc genhtml_branch_coverage=1 00:20:03.692 --rc genhtml_function_coverage=1 00:20:03.692 --rc genhtml_legend=1 00:20:03.692 --rc geninfo_all_blocks=1 00:20:03.692 --rc geninfo_unexecuted_blocks=1 00:20:03.692 00:20:03.692 ' 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.692 18:08:11 
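The version probe traced above is how the harness decides which lcov flags to export: cmp_versions splits the dotted version strings on '.' into arrays and compares them field by field, and since the installed lcov reports 1.15 (older than 2), the pre-2.0 --rc lcov_branch_coverage/--rc lcov_function_coverage spellings are exported. A rough standalone equivalent of that comparison; version_lt is a hypothetical helper, not the harness's function name:

    # Succeeds when dotted version $1 sorts before $2, compared field by field.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not "less than"
    }
    # Same probe as the trace: last whitespace-separated field of 'lcov --version'.
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use pre-2.0 lcov flags"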
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.692 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.951 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.951 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.952 18:08:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:12.076 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:12.076 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:12.076 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:12.076 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.076 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:12.077 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.077 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:12.077 altname enp217s0f0np0 00:20:12.077 altname ens818f0np0 00:20:12.077 inet 192.168.100.8/24 scope global mlx_0_0 00:20:12.077 
valid_lft forever preferred_lft forever 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:12.077 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.077 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:12.077 altname enp217s0f1np1 00:20:12.077 altname ens818f1np1 00:20:12.077 inet 192.168.100.9/24 scope global mlx_0_1 00:20:12.077 valid_lft forever preferred_lft forever 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.077 18:08:18 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:12.077 192.168.100.9' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:12.077 192.168.100.9' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:12.077 192.168.100.9' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.077 18:08:18 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2411407 00:20:12.077 
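Everything the later connect steps need is derived above: the harness walks the RDMA-capable netdevs (mlx_0_0, mlx_0_1), reads each one's first IPv4 address, and ends up with 192.168.100.8 and 192.168.100.9 as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. The extraction pipeline is exactly what the trace shows; as a self-contained sketch:

    # First IPv4 address of an interface: field 4 of 'ip -o -4' minus the /prefix.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    first=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
    second=$(get_ip_address mlx_0_1)   # 192.168.100.9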
18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2411407
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2411407 ']'
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:12.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:12.077 [2024-12-09 18:08:19.054820] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:20:12.077 [2024-12-09 18:08:19.054878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:12.077 [2024-12-09 18:08:19.147612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:12.077 [2024-12-09 18:08:19.188294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:12.077 [2024-12-09 18:08:19.188328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:12.077 [2024-12-09 18:08:19.188338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:12.077 [2024-12-09 18:08:19.188346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:12.077 [2024-12-09 18:08:19.188353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
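With addressing settled, nvmfappstart launches the target (nvmf_tgt pinned to cores 1-3 via -m 0xE, full tracepoint mask via -e 0xFFFF) and waitforlisten blocks until the app's RPC socket at /var/tmp/spdk.sock answers, retrying up to 100 times per the trace. One way to reproduce that wait outside the harness, assuming SPDK_DIR points at the built tree and using the generic rpc_get_methods RPC; this polling loop is a sketch, not the harness's waitforlisten:

    # Launch the target in the background, then poll its RPC socket until it is up.
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done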
00:20:12.077 [2024-12-09 18:08:19.189719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.077 [2024-12-09 18:08:19.189809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.077 [2024-12-09 18:08:19.189807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:12.077 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.078 18:08:19 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:12.336 [2024-12-09 18:08:20.126204] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7f50c0/0x7f95b0) succeed. 00:20:12.336 [2024-12-09 18:08:20.135386] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7f66b0/0x83ac50) succeed. 00:20:12.336 18:08:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:12.595 Malloc0 00:20:12.595 18:08:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.853 18:08:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:13.111 18:08:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:13.111 [2024-12-09 18:08:21.026835] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:13.111 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:13.370 [2024-12-09 18:08:21.215209] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:13.370 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:13.628 [2024-12-09 18:08:21.403897] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2411719 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify 
-t 15 -f 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2411719 /var/tmp/bdevperf.sock 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2411719 ']' 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.628 18:08:21 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:14.561 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.561 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:14.561 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:14.819 NVMe0n1 00:20:14.819 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:15.077 00:20:15.077 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2411983 00:20:15.077 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.077 18:08:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:16.012 18:08:23 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.270 18:08:24 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:19.553 18:08:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:19.553 00:20:19.553 18:08:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:19.553 18:08:27 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:22.837 18:08:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:22.837 [2024-12-09 18:08:30.666998] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:22.837 18:08:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:20:23.771 18:08:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:20:24.029 18:08:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2411983
00:20:30.595 {
00:20:30.595 "results": [
00:20:30.595 {
00:20:30.595 "job": "NVMe0n1",
00:20:30.595 "core_mask": "0x1",
00:20:30.595 "workload": "verify",
00:20:30.595 "status": "finished",
00:20:30.595 "verify_range": {
00:20:30.595 "start": 0,
00:20:30.595 "length": 16384
00:20:30.595 },
00:20:30.595 "queue_depth": 128,
00:20:30.595 "io_size": 4096,
00:20:30.595 "runtime": 15.005835,
00:20:30.595 "iops": 14387.80314457676,
00:20:30.595 "mibps": 56.202356033502966,
00:20:30.595 "io_failed": 4789,
00:20:30.595 "io_timeout": 0,
00:20:30.595 "avg_latency_us": 8685.732010751733,
00:20:30.595 "min_latency_us": 337.5104,
00:20:30.595 "max_latency_us": 1020054.7328
00:20:30.595 }
00:20:30.595 ],
00:20:30.595 "core_count": 1
00:20:30.595 }
00:20:30.595 18:08:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2411719
00:20:30.595 18:08:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2411719 ']'
00:20:30.595 18:08:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2411719
00:20:30.595 18:08:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:20:30.595 18:08:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:30.595 18:08:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411719
00:20:30.596 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:30.596 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:30.596 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411719'
00:20:30.596 killing process with pid 2411719
00:20:30.596 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2411719
00:20:30.596 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2411719
00:20:30.596 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:30.596 [2024-12-09 18:08:21.477538] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
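This is the heart of the failover exercise. bdevperf attached NVMe0 with -x failover once per listener (4420 and 4421), so the nvme bdev holds a standby path; the script then removes the live listener mid-I/O, later restores it, and rotates through 4422, all while the 15-second verify workload runs. The summary JSON above reflects that: the job still finishes at about 14.4k IOPS, and the 4789 io_failed entries are the commands caught on a path as it was torn down. A condensed sketch of the path setup and one cutover, using the same RPCs the log shows (rpc.py paths abbreviated):

    # Register two paths; -x failover makes the bdev switch paths instead of failing.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Tear down the active path on the target side mid-I/O; traffic should resume
    # on the surviving listener after a burst of aborted commands.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma \
        -a 192.168.100.8 -s 4420

The try.txt dump that follows shows the per-cutover cost: the IOPS samples dip (18148.00 down to 9825.50 in the first two), and each removed listener produces a run of ABORTED - SQ DELETION completions as the old RDMA queue pair is deleted.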
00:20:30.596 [2024-12-09 18:08:21.477597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2411719 ] 00:20:30.596 [2024-12-09 18:08:21.566379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.596 [2024-12-09 18:08:21.606979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.596 Running I/O for 15 seconds... 00:20:30.596 18148.00 IOPS, 70.89 MiB/s [2024-12-09T17:08:38.575Z] 9825.50 IOPS, 38.38 MiB/s [2024-12-09T17:08:38.575Z] [2024-12-09 18:08:25.007252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182200 00:20:30.596 [2024-12-09 18:08:25.007419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.596 [2024-12-09 18:08:25.007430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 
len:0x1000 key:0x182200
00:20:30.596 [2024-12-09 18:08:25.007439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.596 [2024-12-09 18:08:25.007450 - 18:08:25.008493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:26200-26616 len:8 (8-block stride) SGL KEYED DATA BLOCK ADDRESS 0x200004368000 down to 0x200004300000 len:0x1000 key:0x182200, each paired with 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.597 [2024-12-09 18:08:25.008504 - 18:08:25.009674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:26624-27104 len:8 (8-block stride) SGL DATA BLOCK OFFSET 0x0 len:0x1000, each paired with 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.009685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.599 [2024-12-09 18:08:25.009693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.009704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.599 [2024-12-09 18:08:25.009713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.009723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.599 [2024-12-09 18:08:25.009731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.009742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.599 [2024-12-09 18:08:25.009751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.009762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.599 [2024-12-09 18:08:25.009770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.011521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:30.599 [2024-12-09 18:08:25.011536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:30.599 [2024-12-09 18:08:25.011545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27152 len:8 PRP1 0x0 PRP2 0x0
00:20:30.599 [2024-12-09 18:08:25.011554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:25.011599] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:20:30.599 [2024-12-09 18:08:25.011619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:30.599 [2024-12-09 18:08:25.014405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:30.599 [2024-12-09 18:08:25.028864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:20:30.599 [2024-12-09 18:08:25.068505] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
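The notices above capture one complete failover episode: every command still queued on the old submission queue is printed and completed with ABORTED - SQ DELETION, one remaining request is completed manually, bdev_nvme moves the transport ID from 192.168.100.8:4420 to 192.168.100.8:4421, the controller is marked failed and disconnected, the stale qpair surfaces CQ transport error -6, and the reset completes successfully roughly 57 ms after the failover notice. Below is a minimal triage sketch in Python for pulling that timeline out of a log shaped like this one; the record regex and the function-name whitelist are assumptions inferred from the lines above, not part of any SPDK tooling.

import re
import sys

# One record per line, shaped like the log above:
#   [2024-12-09 18:08:25.011599] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: ...
RECORD = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} [\d:.]+)\] "
    r"(?P<file>\S+?):\s*(?P<cline>\d+):(?P<func>\w+): "
    r"\*(?P<level>NOTICE|ERROR)\*: (?P<msg>.*)")

# Failover milestones worth surfacing; names taken from the records above.
KEY_FUNCS = {"nvme_qpair_abort_queued_reqs", "bdev_nvme_failover_trid",
             "nvme_ctrlr_fail", "nvme_ctrlr_disconnect",
             "bdev_nvme_reset_ctrlr_complete"}

def failover_timeline(log_text):
    """Count SQ-deletion aborts and collect the key failover events."""
    aborted, events = 0, []
    for line in log_text.splitlines():
        m = RECORD.search(line)
        if not m:
            continue
        if "ABORTED - SQ DELETION" in m.group("msg"):
            aborted += 1
        elif m.group("func") in KEY_FUNCS:
            events.append((m.group("ts"), m.group("func"), m.group("msg")))
    return aborted, events

if __name__ == "__main__":
    count, events = failover_timeline(sys.stdin.read())
    print(f"{count} completions aborted by SQ deletion")
    for ts, func, msg in events:
        print(f"{ts}  {func}: {msg}")

Fed a full console log on stdin, it prints the abort count followed by the Start failover / resetting controller / Resetting controller successful sequence with timestamps.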
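The per-second throughput samples just below are internally consistent with 4 KiB I/O: every command in this run is len:8 512-byte blocks with a 0x1000-byte data buffer, so MiB/s = IOPS x 4096 / 2^20 = IOPS / 256 (11699.67 / 256 = 45.70). A quick sanity check; the 4 KiB I/O size is inferred from the len fields rather than stated anywhere in the log.

# Each I/O is 8 x 512 B = 4096 B (len:8, SGL len:0x1000 above), so MiB/s = IOPS / 256.
for iops, mibs in [(11699.67, 45.70), (13344.75, 52.13), (12626.20, 49.32)]:
    assert abs(iops * 4096 / 2**20 - mibs) < 0.01, (iops, mibs)
print("all three samples match 4 KiB I/O")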
00:20:30.599 11699.67 IOPS, 45.70 MiB/s
[2024-12-09T17:08:38.578Z] 13344.75 IOPS, 52.13 MiB/s
[2024-12-09T17:08:38.578Z] 12626.20 IOPS, 49.32 MiB/s
[2024-12-09T17:08:38.578Z] [2024-12-09 18:08:28.477687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.599 [2024-12-09 18:08:28.477726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.599 [2024-12-09 18:08:28.477743 - 18:08:28.479020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: interleaved WRITE sqid:1 nsid:1 lba:124776-125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:124240-124456 len:8 SGL KEYED DATA BLOCK ADDRESS len:0x1000 key:0x182600, each paired with 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
00:20:30.601 [2024-12-09 18:08:28.479031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182600
00:20:30.601 [2024-12-09
18:08:28.479040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182600 00:20:30.601 [2024-12-09 18:08:28.479391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x182600 
00:20:30.601 [2024-12-09 18:08:28.479411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.601 [2024-12-09 18:08:28.479487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.601 [2024-12-09 18:08:28.479498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125208 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.602 [2024-12-09 18:08:28.479721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479790] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:124656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.479987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.479996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124728 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200004314000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.480201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x182600 00:20:30.602 [2024-12-09 18:08:28.480210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.602 [2024-12-09 18:08:28.482111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.602 [2024-12-09 18:08:28.482125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.602 [2024-12-09 18:08:28.482134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124760 len:8 PRP1 0x0 PRP2 0x0 00:20:30.602 [2024-12-09 18:08:28.482146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.603 [2024-12-09 18:08:28.482188] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:20:30.603 [2024-12-09 18:08:28.482201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:30.603 [2024-12-09 18:08:28.484976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:30.603 [2024-12-09 18:08:28.499164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:20:30.603 [2024-12-09 18:08:28.538641] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:20:30.603 11702.17 IOPS, 45.71 MiB/s [2024-12-09T17:08:38.582Z] 12656.43 IOPS, 49.44 MiB/s [2024-12-09T17:08:38.582Z] 13363.38 IOPS, 52.20 MiB/s [2024-12-09T17:08:38.582Z] 13780.22 IOPS, 53.83 MiB/s [2024-12-09T17:08:38.582Z]
00:20:30.603 [2024-12-09 18:08:32.876540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x182200
00:20:30.603 [2024-12-09 18:08:32.876578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs of the same form elided: READ (lba 99112-99392, SGL KEYED DATA BLOCK, key:0x182200) and WRITE (lba 99544-100000, SGL DATA BLOCK) commands on sqid:1, each reported ABORTED - SQ DELETION (00/08) ...]
00:20:30.605 [2024-12-09 18:08:32.878434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:30.605 [2024-12-09 18:08:32.878443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.605 [2024-12-09 18:08:32.878462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.605 [2024-12-09 18:08:32.878481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.605 [2024-12-09 18:08:32.878500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:99448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x182200 00:20:30.605 [2024-12-09 18:08:32.878657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.605 [2024-12-09 18:08:32.878677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.605 [2024-12-09 18:08:32.878687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.605 [2024-12-09 18:08:32.878696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.878715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.878734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.878753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.878772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.878791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.878810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.878981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.878992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.879002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 
nsid:1 lba:99536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182200 00:20:30.606 [2024-12-09 18:08:32.879011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.879022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.879031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.879041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.606 [2024-12-09 18:08:32.879050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:54d0000 sqhd:7210 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.880991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.606 [2024-12-09 18:08:32.881005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.606 [2024-12-09 18:08:32.881013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100120 len:8 PRP1 0x0 PRP2 0x0 00:20:30.606 [2024-12-09 18:08:32.881026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.606 [2024-12-09 18:08:32.881069] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:20:30.606 [2024-12-09 18:08:32.881081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:30.606 [2024-12-09 18:08:32.883844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:30.606 [2024-12-09 18:08:32.897751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:20:30.606 [2024-12-09 18:08:32.941081] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
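The abort flood above is the expected signature of a path switch: deleting the active listener's submission queue completes every queued request as ABORTED - SQ DELETION, after which bdev_nvme fails over to the next registered path. A minimal sketch of the RPC sequence that builds that path set, reconstructed from the bdev_nvme_attach_controller calls traced further down in this log (socket, address, and ports are the ones this job uses):

    #!/usr/bin/env bash
    # Sketch only: rebuilds the failover path set this test exercises,
    # using the same rpc.py invocations shown in the host/failover.sh trace.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Register the same subsystem over three RDMA listeners. "-x failover"
    # makes the extra trids standby paths: I/O moves to the next path only
    # when the current one fails, which is what produced the log above.
    for port in 4420 4421 4422; do
      "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma \
          -a 192.168.100.8 -s "$port" -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -x failover
    done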
00:20:30.606 12402.20 IOPS, 48.45 MiB/s
[2024-12-09T17:08:38.585Z] 12936.45 IOPS, 50.53 MiB/s
[2024-12-09T17:08:38.585Z] 13388.58 IOPS, 52.30 MiB/s
[2024-12-09T17:08:38.585Z] 13774.00 IOPS, 53.80 MiB/s
[2024-12-09T17:08:38.585Z] 14103.21 IOPS, 55.09 MiB/s
[2024-12-09T17:08:38.585Z] 14387.73 IOPS, 56.20 MiB/s
00:20:30.606 Latency(us)
00:20:30.606 [2024-12-09T17:08:38.585Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:20:30.606 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:30.606 Verification LBA range: start 0x0 length 0x4000
00:20:30.606 NVMe0n1 : 15.01    14387.80      56.20     319.14       0.00    8685.73     337.51 1020054.73
00:20:30.606 [2024-12-09T17:08:38.585Z] ===================================================================================================================
00:20:30.606 [2024-12-09T17:08:38.585Z] Total : 14387.80      56.20     319.14       0.00    8685.73     337.51 1020054.73
00:20:30.606 Received shutdown signal, test time was about 15.000000 seconds
00:20:30.606
00:20:30.606 Latency(us)
00:20:30.606 [2024-12-09T17:08:38.585Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:20:30.606 [2024-12-09T17:08:38.585Z] ===================================================================================================================
00:20:30.606 [2024-12-09T17:08:38.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2414642
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2414642 /var/tmp/bdevperf.sock
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2414642 ']'
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:30.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
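host/failover.sh@65-67 above reduces the pass/fail decision to a grep over the captured bdevperf log: the run passes only if exactly three successful controller resets were recorded, one per forced path failure. A hedged sketch of that check (try.txt is the capture file this job uses; the threshold of 3 matches the three listener removals the test performs):

    # Sketch of the verification traced at failover.sh@65-67.
    try=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$try")
    if (( count != 3 )); then
        echo "expected 3 controller recoveries, saw $count" >&2
        exit 1
    fi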
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:30.606 18:08:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:31.172 18:08:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:31.172 18:08:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:20:31.172 18:08:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:20:31.430 [2024-12-09 18:08:39.291945] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:20:31.430 18:08:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:20:31.688 [2024-12-09 18:08:39.496654] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:20:31.688 18:08:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:20:31.946 NVMe0n1
00:20:31.946 18:08:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:20:32.203
00:20:32.203 18:08:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:20:32.461
00:20:32.461 18:08:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:32.461 18:08:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:20:32.719 18:08:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:32.977 18:08:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:20:36.258 18:08:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:36.258 18:08:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:20:36.258 18:08:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2415537
00:20:36.258 18:08:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:36.258 18:08:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2415537
00:20:37.190 {
00:20:37.190 "results": [
00:20:37.190 {
00:20:37.190 "job": "NVMe0n1",
00:20:37.190 "core_mask": "0x1",
00:20:37.190 "workload": "verify",
00:20:37.190 "status": "finished",
00:20:37.190 "verify_range": {
00:20:37.190 "start": 0,
00:20:37.190 "length": 16384
00:20:37.190 },
00:20:37.190 "queue_depth": 128,
00:20:37.190 "io_size": 4096,
00:20:37.190 "runtime": 1.006724,
00:20:37.190 "iops": 18054.600863791864,
00:20:37.190 "mibps": 70.52578462418697,
00:20:37.190 "io_failed": 0,
00:20:37.190 "io_timeout": 0,
00:20:37.190 "avg_latency_us": 7053.339402816901,
00:20:37.190 "min_latency_us": 2621.44,
00:20:37.190 "max_latency_us": 18140.3648
00:20:37.190 }
00:20:37.190 ],
00:20:37.190 "core_count": 1
00:20:37.190 }
00:20:37.190 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:37.190 [2024-12-09 18:08:38.278358] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
[2024-12-09 18:08:38.278412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2414642 ]
[2024-12-09 18:08:38.367660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 18:08:38.403304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-09 18:08:40.680473] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
[2024-12-09 18:08:40.681111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
[2024-12-09 18:08:40.681147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-12-09 18:08:40.705399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
[2024-12-09 18:08:40.721873] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
Running I/O for 1 seconds...
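perform_tests hands the run summary back as the JSON object printed above, which is friendlier to scripts than the human-readable table that follows. An illustrative way to pull the headline numbers out of it; jq is an assumption here, not something this job invokes, and results.json stands in for the captured object:

    # Illustrative only: extract headline fields from the perform_tests JSON.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg, \(.io_failed) failed"' results.json
    # -> NVMe0n1: 18054.600863791864 IOPS, 7053.339402816901 us avg, 0 failed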
00:20:37.190 18048.00 IOPS, 70.50 MiB/s
00:20:37.190 Latency(us)
00:20:37.190 [2024-12-09T17:08:45.169Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:20:37.190 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:37.190 Verification LBA range: start 0x0 length 0x4000
00:20:37.190 NVMe0n1 : 1.01    18054.60      70.53       0.00       0.00    7053.34    2621.44   18140.36
00:20:37.190 [2024-12-09T17:08:45.169Z] ===================================================================================================================
00:20:37.190 [2024-12-09T17:08:45.169Z] Total : 18054.60      70.53       0.00       0.00    7053.34    2621.44   18140.36
00:20:37.190 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:37.190 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:20:37.447 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:37.705 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:37.705 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:37.705 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:38.048 18:08:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:41.399 18:08:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:41.399 18:08:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2414642
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2414642 ']'
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2414642
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2414642
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2414642'
00:20:41.399 killing process with pid 2414642
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2414642
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2414642
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:20:41.399 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:41.658 rmmod nvme_rdma
00:20:41.658 rmmod nvme_fabrics
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2411407 ']'
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2411407
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2411407 ']'
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2411407
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411407
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411407'
00:20:41.658 killing process with pid 2411407
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2411407
00:20:41.658 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2411407
00:20:41.917 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:41.917 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:41.917
00:20:41.917 real 0m38.399s
00:20:41.918 user 2m6.078s
00:20:41.918 sys 0m8.031s
00:20:41.918 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:41.918 18:08:49 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
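The nvmftestfini teardown traced above (nvmf/common.sh@121-129) disables -e and wraps the module unload in a retry loop, since nvme-rdma can still hold references for a moment while queue pairs drain. A condensed sketch of that pattern; the pause between attempts is an assumption, as the trace only shows the loop and the modprobe calls:

    sync
    set +e
    for i in {1..20}; do
        # unload host-side NVMe-oF modules; succeeds once all qpairs are gone
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumption: brief pause before retrying the unload
    done
    set -e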
00:20:41.918 ************************************ 00:20:41.918 END TEST nvmf_failover 00:20:41.918 ************************************ 00:20:42.176 18:08:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:20:42.176 18:08:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.176 18:08:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.176 18:08:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.176 ************************************ 00:20:42.176 START TEST nvmf_host_discovery 00:20:42.176 ************************************ 00:20:42.177 18:08:49 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:20:42.177 * Looking for test storage... 00:20:42.177 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.177 --rc genhtml_branch_coverage=1 00:20:42.177 --rc genhtml_function_coverage=1 00:20:42.177 --rc genhtml_legend=1 00:20:42.177 --rc geninfo_all_blocks=1 00:20:42.177 --rc geninfo_unexecuted_blocks=1 00:20:42.177 00:20:42.177 ' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.177 --rc genhtml_branch_coverage=1 00:20:42.177 --rc genhtml_function_coverage=1 00:20:42.177 --rc genhtml_legend=1 00:20:42.177 --rc geninfo_all_blocks=1 00:20:42.177 --rc geninfo_unexecuted_blocks=1 00:20:42.177 00:20:42.177 ' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.177 --rc genhtml_branch_coverage=1 00:20:42.177 --rc genhtml_function_coverage=1 00:20:42.177 --rc genhtml_legend=1 00:20:42.177 --rc geninfo_all_blocks=1 00:20:42.177 --rc geninfo_unexecuted_blocks=1 00:20:42.177 00:20:42.177 ' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.177 --rc genhtml_branch_coverage=1 00:20:42.177 --rc genhtml_function_coverage=1 00:20:42.177 --rc genhtml_legend=1 00:20:42.177 --rc geninfo_all_blocks=1 00:20:42.177 --rc geninfo_unexecuted_blocks=1 00:20:42.177 00:20:42.177 ' 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:42.177 18:08:50 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.177 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:20:42.437 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:20:42.437 00:20:42.437 real 0m0.231s 00:20:42.437 user 0m0.122s 00:20:42.437 sys 0m0.126s 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:42.437 ************************************ 00:20:42.437 END TEST nvmf_host_discovery 00:20:42.437 ************************************ 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.437 ************************************ 00:20:42.437 START TEST nvmf_host_multipath_status 00:20:42.437 ************************************ 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:20:42.437 * Looking for test storage... 00:20:42.437 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.437 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:42.697 18:08:50 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.697 --rc genhtml_branch_coverage=1 00:20:42.697 --rc genhtml_function_coverage=1 00:20:42.697 --rc genhtml_legend=1 00:20:42.697 --rc geninfo_all_blocks=1 00:20:42.697 --rc geninfo_unexecuted_blocks=1 00:20:42.697 00:20:42.697 ' 00:20:42.697 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.698 --rc genhtml_branch_coverage=1 00:20:42.698 --rc genhtml_function_coverage=1 00:20:42.698 --rc genhtml_legend=1 00:20:42.698 --rc geninfo_all_blocks=1 00:20:42.698 --rc geninfo_unexecuted_blocks=1 00:20:42.698 00:20:42.698 ' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.698 --rc genhtml_branch_coverage=1 00:20:42.698 --rc genhtml_function_coverage=1 00:20:42.698 --rc genhtml_legend=1 00:20:42.698 --rc geninfo_all_blocks=1 00:20:42.698 --rc geninfo_unexecuted_blocks=1 00:20:42.698 00:20:42.698 ' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.698 --rc genhtml_branch_coverage=1 00:20:42.698 --rc genhtml_function_coverage=1 
00:20:42.698 --rc genhtml_legend=1 00:20:42.698 --rc geninfo_all_blocks=1 00:20:42.698 --rc geninfo_unexecuted_blocks=1 00:20:42.698 00:20:42.698 ' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
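paths/export.sh prepends the Go, protoc, and golangci tool directories each time it is sourced, which is why the PATH echoed above carries the same entries several times over. Duplicate entries are harmless to command lookup, but a dedup pass keeps the variable readable; a sketch, not something this job runs:

    # Drop repeated PATH entries, preserving first-seen order.
    dedupe_path() {
        local out= entry
        local IFS=:
        for entry in $PATH; do
            case ":$out:" in
                *":$entry:"*) ;;                    # already kept
                *) out=${out:+$out:}$entry ;;
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedupe_path)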
-- # '[' '' -eq 1 ']' 00:20:42.698 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.698 18:08:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.821 18:08:57 
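Note the genuine (and benign) bash complaint captured here: nvmf/common.sh line 33 evaluates `[ '' -eq 1 ]` with an empty variable, and `[` rejects a non-integer operand, so the test simply fails and the script continues. The usual guards for that pattern, sketched (not a patch applied in this run):

    # '[ "$flag" -eq 1 ]' errors when flag is empty or unset; either guard works:
    if [ -n "${flag:-}" ] && [ "$flag" -eq 1 ]; then
        echo "flag set"
    fi
    if [ "${flag:-0}" -eq 1 ]; then    # default the value so the test sees a number
        echo "flag set"
    fi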
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
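gather_supported_nvmf_pci_devs builds its candidate arrays from "vendor:device" keys (Intel 0x8086 E810/X722 parts, Mellanox 0x15b3 ConnectX parts), and since SPDK_TEST_NVMF_NICS=mlx5 the pci_devs array is narrowed to the mlx entries, matching the two 0x15b3:0x1015 functions found below. Roughly the same inventory by hand; a sketch, with naming dependent on the local lspci database:

    # List Mellanox (vendor 0x15b3) functions with PCI address and [vendor:device] IDs.
    lspci -Dnn -d 15b3:
    # e.g. 0000:d9:00.0 Ethernet controller [0200]: Mellanox ... [15b3:1015]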
(( 2 == 0 )) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:50.821 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:50.821 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:50.821 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.821 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
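For each surviving PCI function the script globs /sys/bus/pci/devices/$pci/net/* to learn the bound kernel netdev, which is how it reports "Found net devices under 0000:d9:00.0: mlx_0_0". The same lookup standalone, as a sketch:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # Each entry under .../net is a netdev bound to that PCI function.
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue    # skip if the glob matched nothing
            echo "$pci -> ${dev##*/}"
        done
    done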
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:50.822 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:50.822 
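rdma_device_init then loads the kernel RDMA stack before any addresses are assigned; the trace shows the exact module set. A tolerant equivalent, sketched:

    # Modules probed by the trace, in the same order; failures are reported, not fatal.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || echo "warning: could not load $mod" >&2
    done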
18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:50.822 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.822 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:50.822 altname enp217s0f0np0 00:20:50.822 altname ens818f0np0 00:20:50.822 inet 192.168.100.8/24 scope global mlx_0_0 00:20:50.822 valid_lft forever preferred_lft forever 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
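get_ip_address is exactly the three-stage pipeline in the trace: `ip -o -4 addr show <if>` prints one line per IPv4 address, awk takes the fourth field (the CIDR form), and cut strips the prefix length, yielding 192.168.100.8 for mlx_0_0. As a standalone helper mirroring those lines:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig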
00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:50.822 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.822 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:50.822 altname enp217s0f1np1 00:20:50.822 altname ens818f1np1 00:20:50.822 inet 192.168.100.9/24 scope global mlx_0_1 00:20:50.822 valid_lft forever preferred_lft forever 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:20:50.822 18:08:57 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:50.822 192.168.100.9' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:50.822 192.168.100.9' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:50.822 192.168.100.9' 00:20:50.822 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:50.823 18:08:57 
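The two discovered addresses are joined into RDMA_IP_LIST and split back out with the head/tail pipeline shown above; reproduced standalone with this run's values:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9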
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2420018 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2420018 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2420018 ']' 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.823 18:08:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:50.823 [2024-12-09 18:08:57.823223] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:20:50.823 [2024-12-09 18:08:57.823272] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.823 [2024-12-09 18:08:57.912918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:50.823 [2024-12-09 18:08:57.952157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.823 [2024-12-09 18:08:57.952197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.823 [2024-12-09 18:08:57.952206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.823 [2024-12-09 18:08:57.952215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.823 [2024-12-09 18:08:57.952237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
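nvmfappstart launches the target with `-m 0x3`, pinning it to cores 0 and 1 (hence the two "Reactor started" notices that follow), and waitforlisten polls the RPC socket until the app answers. A sketch of both pieces; the polling loop's shape is ours, though rpc_get_methods is a standard SPDK RPC:

    printf 'mask=0x%x\n' $(( (1 << 0) | (1 << 1) ))    # -> mask=0x3, cores 0 and 1

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll until the RPC server on /var/tmp/spdk.sock responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done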
00:20:50.823 [2024-12-09 18:08:57.953520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.823 [2024-12-09 18:08:57.953520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2420018 00:20:50.823 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:51.080 [2024-12-09 18:08:58.884960] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x171f200/0x17236f0) succeed. 00:20:51.080 [2024-12-09 18:08:58.894033] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1720750/0x1764d90) succeed. 00:20:51.080 18:08:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:51.338 Malloc0 00:20:51.338 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:51.596 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:51.596 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:51.854 [2024-12-09 18:08:59.718145] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:51.854 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:52.112 [2024-12-09 18:08:59.910518] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2420400 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
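The target-side setup that follows in the trace is a short RPC sequence: an RDMA transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1 (allow-any-host, ANA reporting enabled, two namespaces max), the namespace itself, and listeners on both ports. Condensed, with every command as it appears above:

    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4421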
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2420400 /var/tmp/bdevperf.sock 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2420400 ']' 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.112 18:08:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:53.044 18:09:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.045 18:09:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:53.045 18:09:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:53.302 18:09:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:53.560 Nvme0n1 00:20:53.560 18:09:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:53.817 Nvme0n1 00:20:53.818 18:09:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:53.818 18:09:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.719 18:09:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:55.719 18:09:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:20:55.978 18:09:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:56.236 18:09:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:57.171 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
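bdevperf attaches Nvme0 through both listeners with `-x multipath`, and everything from here on is one helper exercised repeatedly: port_status (multipath_status.sh@64) feeds bdev_nvme_get_io_paths through a jq filter that selects the io_path by listener port and prints a single boolean attribute. The helper, reconstructed from the trace:

    port_status() {
        local port=$1 attr=$2 expected=$3    # attr: current | connected | accessible
        local actual
        actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true    # the path on 4420 is the one carrying I/O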
host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:57.171 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:57.171 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.171 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:57.428 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.428 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:57.428 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:57.428 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.685 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:57.943 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:57.943 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:57.943 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:57.943 18:09:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:58.201 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.201 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:20:58.201 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:58.201 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:58.458 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:58.458 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:58.458 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:58.458 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:58.715 18:09:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:59.648 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:59.648 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:59.649 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.649 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:59.907 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:59.907 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:59.907 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:59.907 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:00.165 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.165 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:00.165 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.165 18:09:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.423 18:09:08 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:00.423 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.681 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.681 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:00.681 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:00.681 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:00.939 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:00.939 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:00.939 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:01.197 18:09:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:01.197 18:09:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:02.570 18:09:10 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.570 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:02.829 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:03.086 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.086 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:03.087 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.087 18:09:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:03.344 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.344 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:03.344 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:03.344 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:03.602 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:03.602 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:21:03.602 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:03.602 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:03.860 18:09:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:04.794 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:04.794 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:04.794 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.794 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:05.052 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.052 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:05.052 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.052 18:09:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:05.311 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:05.311 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:05.311 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:05.311 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.569 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.569 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:05.569 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:05.569 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.826 18:09:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.826 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:06.083 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:06.083 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:06.083 18:09:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:06.341 18:09:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:06.598 18:09:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:07.531 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:07.531 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:07.531 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.531 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:07.789 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.789 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:07.789 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.789 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:08.047 18:09:15 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.047 18:09:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:08.305 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.305 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:08.305 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.305 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:08.562 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:08.562 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:08.562 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.562 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:08.820 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:08.820 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:08.820 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:21:08.820 18:09:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:09.078 18:09:16 
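set_ANA_state is the target-side half of each round: two nvmf_subsystem_listener_set_ana_state RPCs against cnode1, the first argument applied to the 4420 listener and the second to 4421; the sleep 1 that follows each call gives the initiator time to observe the ANA change before check_status runs. A sketch reconstructed from the logged calls (same working-directory assumption as above):

  # set_ANA_state <state for 4420> <state for 4421>
  # ANA states exercised in this run: optimized, non_optimized, inaccessible
  set_ANA_state() {
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4420 -n "$1"
      scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }

Note how the expected current flags change once sh@116 below runs bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active: under the default active_passive policy at most one path is current at a time, while the active_active rounds expect every path in the best available ANA state to be current at once (both paths for optimized/optimized and for non_optimized/non_optimized, only 4421 for non_optimized/optimized).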
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:10.011 18:09:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:10.011 18:09:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:10.011 18:09:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.011 18:09:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:10.269 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.269 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:10.269 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.269 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:10.527 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.527 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:10.527 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.527 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:10.785 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.785 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:10.785 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.785 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.043 18:09:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:11.301 18:09:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.301 18:09:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:11.558 18:09:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:11.558 18:09:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:21:11.816 18:09:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:11.816 18:09:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.188 18:09:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.188 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.188 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:13.188 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.188 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:13.445 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.445 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:13.445 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.445 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:13.703 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.703 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:13.703 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.703 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:13.960 18:09:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:14.218 18:09:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:21:14.508 18:09:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:15.478 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:15.478 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:15.478 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.478 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:15.736 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:15.736 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:15.736 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.736 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.994 18:09:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:16.252 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.252 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:16.252 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.252 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:16.510 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:16.768 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:21:17.026 18:09:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:17.958 18:09:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:17.958 18:09:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:17.958 18:09:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.958 18:09:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:18.216 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.216 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:18.216 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.216 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:18.474 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.474 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:18.474 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:18.474 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.732 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:18.990 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:18.990 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:18.990 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:18.990 18:09:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:19.248 18:09:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.248 18:09:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:19.248 18:09:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:21:19.506 18:09:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:21:19.506 18:09:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.878 18:09:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:21.136 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.136 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:21.136 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.136 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:21.394 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.394 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:21.394 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.394 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:21.652 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:21.652 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:21.652 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:21.652 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2420400 00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2420400 ']' 00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2420400 00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420400
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420400'
00:21:21.910 killing process with pid 2420400
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2420400
00:21:21.910 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2420400
00:21:21.910 {
00:21:21.910   "results": [
00:21:21.910     {
00:21:21.910       "job": "Nvme0n1",
00:21:21.910       "core_mask": "0x4",
00:21:21.910       "workload": "verify",
00:21:21.910       "status": "terminated",
00:21:21.910       "verify_range": {
00:21:21.910         "start": 0,
00:21:21.910         "length": 16384
00:21:21.910       },
00:21:21.910       "queue_depth": 128,
00:21:21.910       "io_size": 4096,
00:21:21.910       "runtime": 27.956164,
00:21:21.910       "iops": 15932.192986133578,
00:21:21.910       "mibps": 62.23512885208429,
00:21:21.910       "io_failed": 0,
00:21:21.910       "io_timeout": 0,
00:21:21.910       "avg_latency_us": 8013.978612842751,
00:21:21.910       "min_latency_us": 90.9312,
00:21:21.910       "max_latency_us": 3019898.88
00:21:21.910     }
00:21:21.910   ],
00:21:21.910   "core_count": 1
00:21:21.910 }
00:21:22.175 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2420400
00:21:22.175 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:22.175 [2024-12-09 18:08:59.989285] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
00:21:22.175 [2024-12-09 18:08:59.989347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420400 ]
00:21:22.175 [2024-12-09 18:09:00.082076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:22.175 [2024-12-09 18:09:00.123810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:22.175 Running I/O for 90 seconds...
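The JSON blob above is bdevperf's per-job summary, flushed as killprocess tears it down: the verify job on core mask 0x4 ran 27.96 s of its 90 s budget before being stopped (status "terminated" is the planned outcome here, not a failure), sustaining roughly 15.9k 4 KiB IOPS (62.2 MiB/s) at 8.0 ms average latency with zero failed I/O; the ~3 s max latency is consistent with writes being held across the inaccessible windows. Everything from the cat of try.txt onward replays the bdevperf-side log captured during the run, which is why the timestamps jump back to 18:08:59. Had the summary been captured to a file, the headline numbers could be pulled out with, e.g.:

  # bdevperf_results.json is hypothetical; in this run the JSON went straight to the console
  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed, status \(.status)"' \
      bdevperf_results.json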
00:21:22.175 18468.00 IOPS, 72.14 MiB/s [2024-12-09T17:09:30.154Z] 18578.00 IOPS, 72.57 MiB/s [2024-12-09T17:09:30.154Z] 18602.67 IOPS, 72.67 MiB/s [2024-12-09T17:09:30.154Z] 18609.75 IOPS, 72.69 MiB/s [2024-12-09T17:09:30.154Z] 18607.80 IOPS, 72.69 MiB/s [2024-12-09T17:09:30.154Z] 18634.50 IOPS, 72.79 MiB/s [2024-12-09T17:09:30.154Z] 18633.14 IOPS, 72.79 MiB/s [2024-12-09T17:09:30.154Z] 18628.38 IOPS, 72.77 MiB/s [2024-12-09T17:09:30.154Z] 18610.11 IOPS, 72.70 MiB/s [2024-12-09T17:09:30.154Z] 18592.70 IOPS, 72.63 MiB/s [2024-12-09T17:09:30.154Z] 18594.91 IOPS, 72.64 MiB/s [2024-12-09T17:09:30.154Z] 18595.50 IOPS, 72.64 MiB/s [2024-12-09T17:09:30.154Z] [2024-12-09 18:09:14.157191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 
18:09:14.157421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.175 [2024-12-09 18:09:14.157497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:22.175 [2024-12-09 18:09:14.157508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.157985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.157994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.158006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.158014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.158026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.158035] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.158046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.158055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.158066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.158075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.158087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.158095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:22.176 [2024-12-09 18:09:14.158107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.176 [2024-12-09 18:09:14.158116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
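This command/completion trace is the expected signature of the inaccessible phases driven earlier: each WRITE submitted on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE, i.e. status code type 03h (path related) / status code 02h, and the lba values advancing in 8-block steps are simply the verify workload marching on while the multipath layer retries each I/O on the surviving path instead of failing it (io_failed stayed 0 in the summary above). A quick way to gauge how much traffic bounced off an inaccessible path in the replayed log:

  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt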
00:21:22.177 [2024-12-09 18:09:14.158848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.177 [2024-12-09 18:09:14.158897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.177 [2024-12-09 18:09:14.158909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.158918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.158929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.158938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.158953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.158962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.158975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.158985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.158996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181000 00:21:22.178 [2024-12-09 18:09:14.159026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181000 00:21:22.178 [2024-12-09 18:09:14.159048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181000 00:21:22.178 [2024-12-09 18:09:14.159068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181000 00:21:22.178 [2024-12-09 18:09:14.159088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.159975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.159991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
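Every completion in this run carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02). The pair is (status code type/status code) from the NVMe base specification: type 0x3 is Path Related Status, and code 0x02 within it is Asymmetric Access Inaccessible, i.e. the ANA state this failover test drives the path into; dnr:0 on each completion means Do Not Retry is clear, so the host is free to resubmit on another path. A small lookup table covering the codes seen here (illustrative, not an SPDK API):

    SCT = {
        0x0: "Generic Command Status",
        0x1: "Command Specific Status",
        0x2: "Media and Data Integrity Errors",
        0x3: "Path Related Status",
    }
    PATH_RELATED_SC = {
        0x00: "Internal Path Error",
        0x01: "Asymmetric Access Persistent Loss",
        0x02: "Asymmetric Access Inaccessible",
        0x03: "Asymmetric Access Transition",
    }

    def decode(sct: int, sc: int) -> str:
        """Expand the "(sct/sc)" pair SPDK prints after the status string."""
        if sct == 0x3:
            return PATH_RELATED_SC.get(sc, f"reserved path status 0x{sc:02x}")
        return f"{SCT.get(sct, 'reserved')} 0x{sc:02x}"

    assert decode(0x03, 0x02) == "Asymmetric Access Inaccessible"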
00:21:22.178 [2024-12-09 18:09:14.160398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.178 [2024-12-09 18:09:14.160457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.178 [2024-12-09 18:09:14.160473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:14.160483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:14.160508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:14.160534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:14.160559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:14.160585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:14.160610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:14.160636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:14.160661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:14.160677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:14.160686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.179 17792.00 IOPS, 69.50 MiB/s [2024-12-09T17:09:30.158Z] 16521.14 IOPS, 64.54 MiB/s [2024-12-09T17:09:30.158Z] 15419.73 IOPS, 60.23 MiB/s [2024-12-09T17:09:30.158Z] 15112.50 IOPS, 59.03 MiB/s [2024-12-09T17:09:30.158Z] 15326.76 IOPS, 59.87 MiB/s [2024-12-09T17:09:30.158Z] 15473.67 IOPS, 60.44 MiB/s [2024-12-09T17:09:30.158Z] 15439.42 IOPS, 60.31 MiB/s [2024-12-09T17:09:30.158Z] 15413.70 IOPS, 60.21 MiB/s [2024-12-09T17:09:30.158Z] 15503.52 IOPS, 60.56 MiB/s [2024-12-09T17:09:30.158Z] 15652.00 IOPS, 61.14 MiB/s [2024-12-09T17:09:30.158Z] 15783.91 IOPS, 61.66 MiB/s [2024-12-09T17:09:30.158Z] 15762.08 IOPS, 61.57 MiB/s [2024-12-09T17:09:30.158Z] 15719.56 IOPS, 61.40 MiB/s [2024-12-09T17:09:30.158Z] [2024-12-09 18:09:27.424658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.424729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.424780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.424863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.424917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.424926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.425343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:22.179 [2024-12-09 18:09:27.425364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.425509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.425530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.425550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425562] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x181000 00:21:22.179 [2024-12-09 18:09:27.425593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.179 [2024-12-09 18:09:27.425615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:22.179 [2024-12-09 18:09:27.425626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.425973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.425984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.425993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.426054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.426117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.426138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 
18:09:27.426159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.180 [2024-12-09 18:09:27.426220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x181000 00:21:22.180 [2024-12-09 18:09:27.426282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:22.180 [2024-12-09 18:09:27.426293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
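The bare figures interleaved a few lines above (17792.00 IOPS, 69.50 MiB/s, and so on) appear to be periodic per-interval performance samples from the benchmark driving this I/O; the second, ISO-stamped bracket on each is the Jenkins timestamper. The two columns are consistent with the 4 KiB I/O size visible throughout the trace: len:8 logical blocks of 512 B, matching the len:0x1000 SGL on every command. A quick arithmetic check, assuming Python:

    # MiB/s = IOPS * 8 blocks * 512 B / 2**20; values taken from the log above.
    samples = [(17792.00, 69.50), (16521.14, 64.54), (15419.73, 60.23)]
    for iops, mibps in samples:
        assert abs(iops * 8 * 512 / 2**20 - mibps) < 0.01, (iops, mibps)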
00:21:22.181 [2024-12-09 18:09:27.426356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.426365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.426385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.426449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.426490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.426502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.426511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77448 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000043de000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:22.181 
[2024-12-09 18:09:27.428772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x181000 00:21:22.181 [2024-12-09 18:09:27.428926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:22.181 [2024-12-09 18:09:27.428963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:22.181 [2024-12-09 18:09:27.428972 - 18:09:27.436114] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~90 repeated command/completion print pairs condensed] outstanding READ/WRITE commands on qid:1 all completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02); per-command cid/nsid/lba/SGL details omitted 00:21:22.186
00:21:22.186 15729.35 IOPS, 61.44 MiB/s [2024-12-09T17:09:30.165Z] 15840.59 IOPS, 61.88 MiB/s [2024-12-09T17:09:30.165Z] Received shutdown signal, test time was about 27.956789 seconds
00:21:22.186
00:21:22.186 Latency(us)
00:21:22.186 [2024-12-09T17:09:30.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:22.186 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:22.186 Verification LBA range: start 0x0 length 0x4000
00:21:22.186 Nvme0n1 : 27.96 15932.19 62.24 0.00 0.00 8013.98 90.93 3019898.88
00:21:22.186 [2024-12-09T17:09:30.165Z] ===================================================================================================================
00:21:22.186 [2024-12-09T17:09:30.165Z] Total : 15932.19 62.24 0.00 0.00 8013.98 90.93 3019898.88
00:21:22.186 18:09:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:22.186 18:09:30
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:22.186 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:22.186 rmmod nvme_rdma 00:21:22.186 rmmod nvme_fabrics 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2420018 ']' 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2420018 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2420018 ']' 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2420018 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420018 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420018' 00:21:22.444 killing process with pid 2420018 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2420018 00:21:22.444 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2420018 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:22.703 00:21:22.703 real 0m40.206s 00:21:22.703 user 1m53.166s 00:21:22.703 sys 0m9.623s 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:22.703 ************************************ 00:21:22.703 END TEST nvmf_host_multipath_status 00:21:22.703 ************************************ 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.703 ************************************ 00:21:22.703 START TEST nvmf_discovery_remove_ifc 00:21:22.703 ************************************ 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:21:22.703 * Looking for test storage... 00:21:22.703 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:21:22.703 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.963 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:22.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.964 --rc genhtml_branch_coverage=1 00:21:22.964 --rc genhtml_function_coverage=1 00:21:22.964 --rc genhtml_legend=1 00:21:22.964 --rc geninfo_all_blocks=1 00:21:22.964 --rc geninfo_unexecuted_blocks=1 00:21:22.964 00:21:22.964 ' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:22.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.964 --rc genhtml_branch_coverage=1 00:21:22.964 --rc genhtml_function_coverage=1 00:21:22.964 --rc genhtml_legend=1 00:21:22.964 --rc geninfo_all_blocks=1 00:21:22.964 --rc geninfo_unexecuted_blocks=1 00:21:22.964 00:21:22.964 ' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:22.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.964 --rc genhtml_branch_coverage=1 00:21:22.964 --rc genhtml_function_coverage=1 00:21:22.964 --rc genhtml_legend=1 00:21:22.964 --rc geninfo_all_blocks=1 00:21:22.964 --rc geninfo_unexecuted_blocks=1 00:21:22.964 00:21:22.964 ' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:22.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.964 --rc genhtml_branch_coverage=1 00:21:22.964 --rc genhtml_function_coverage=1 00:21:22.964 --rc genhtml_legend=1 00:21:22.964 --rc geninfo_all_blocks=1 00:21:22.964 --rc geninfo_unexecuted_blocks=1 00:21:22.964 00:21:22.964 ' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
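The lt/cmp_versions trace above decides whether the installed lcov predates 2.x and therefore needs the legacy --rc option spelling. A standalone sketch of that comparison, assuming it mirrors the traced logic rather than the exact scripts/common.sh source:

    # Split each version on '.', '-' or ':' and compare numerically,
    # component by component; missing components default to 0.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"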
00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.964 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
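The '[: : integer expression expected' complaint above is test(1) rejecting an empty operand in a numeric comparison at common.sh line 33: a flag this run's autorun-spdk.conf never sets reaches '[ "" -eq 1 ]', which is malformed rather than false. A minimal reproduction and two guards (the variable name is hypothetical; the trace does not show which flag line 33 reads):

    flag=""                    # hypothetical flag, unset in this run's conf
    [ "$flag" -eq 1 ]          # -> [: : integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]     # defaulted: cleanly false, no error
    (( ${flag:-0} == 1 ))      # arithmetic context, same effect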
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:22.964 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:21:22.964 00:21:22.964 real 0m0.231s 00:21:22.964 user 0m0.120s 00:21:22.964 sys 0m0.126s 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:22.964 ************************************ 00:21:22.964 END TEST nvmf_discovery_remove_ifc 00:21:22.964 ************************************ 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.964 ************************************ 00:21:22.964 START TEST nvmf_identify_kernel_target 00:21:22.964 ************************************ 00:21:22.964 18:09:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:21:23.225 * Looking for test storage... 00:21:23.225 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:23.225 18:09:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:23.225 18:09:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:23.225 18:09:30 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
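discovery_remove_ifc itself exits almost immediately on this transport: the guard traced at discovery_remove_ifc.sh lines 14-16 prints the skip reason and returns success, so run_test still records the test as passing. Roughly, assuming the conventional TEST_TRANSPORT variable name for the --transport argument:

    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to' \
             'configure the same IP for host and target.'
        exit 0
    fi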
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:23.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.225 --rc genhtml_branch_coverage=1 00:21:23.225 --rc genhtml_function_coverage=1 00:21:23.225 --rc genhtml_legend=1 00:21:23.225 --rc geninfo_all_blocks=1 00:21:23.225 --rc geninfo_unexecuted_blocks=1 00:21:23.225 00:21:23.225 ' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:23.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.225 --rc genhtml_branch_coverage=1 00:21:23.225 --rc genhtml_function_coverage=1 00:21:23.225 --rc genhtml_legend=1 00:21:23.225 --rc geninfo_all_blocks=1 00:21:23.225 --rc geninfo_unexecuted_blocks=1 00:21:23.225 00:21:23.225 ' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:23.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.225 --rc genhtml_branch_coverage=1 00:21:23.225 --rc genhtml_function_coverage=1 00:21:23.225 --rc genhtml_legend=1 00:21:23.225 --rc geninfo_all_blocks=1 00:21:23.225 --rc geninfo_unexecuted_blocks=1 00:21:23.225 00:21:23.225 ' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:23.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.225 --rc genhtml_branch_coverage=1 00:21:23.225 --rc genhtml_function_coverage=1 00:21:23.225 --rc genhtml_legend=1 00:21:23.225 --rc geninfo_all_blocks=1 00:21:23.225 --rc geninfo_unexecuted_blocks=1 00:21:23.225 00:21:23.225 ' 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:23.225 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.226 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.226 18:09:31 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:31.349 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:31.349 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:31.349 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:31.349 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.349 18:09:38 
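The discovery loop above matches each Mellanox function (0x15b3:0x1015, a ConnectX-4 Lx) against the known device-ID table, then resolves its netdev name through sysfs. A condensed sketch of the sysfs lookup, assuming the traced pci_net_devs logic without the tcp-only branches:

    # The kernel exposes the netdev bound to a PCI function under
    # /sys/bus/pci/devices/<bdf>/net/.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done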
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:31.349 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.350 
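Before touching interfaces, rdma_device_init loads the kernel IB/RDMA stack; the module set and order come straight from the modprobe lines traced above (nvmf/common.sh lines 66-72):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done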
18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:31.350 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.350 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:31.350 altname enp217s0f0np0 00:21:31.350 altname ens818f0np0 00:21:31.350 inet 192.168.100.8/24 scope global mlx_0_0 00:21:31.350 valid_lft forever preferred_lft forever 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:31.350 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.350 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:31.350 altname enp217s0f1np1 00:21:31.350 altname ens818f1np1 00:21:31.350 inet 192.168.100.9/24 scope global mlx_0_1 00:21:31.350 valid_lft forever preferred_lft forever 00:21:31.350 18:09:38 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.350 
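allocate_nic_ips walks get_rdma_if_list and reads each interface's first IPv4 address with the ip/awk/cut pipeline traced above; note both mlx ports are DOWN at link level yet already carry their 192.168.100.0/24 addresses. As a standalone helper, shaped like the traced get_ip_address:

    # First IPv4 address on an interface, stripped of its /prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9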
18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:31.350 192.168.100.9' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:31.350 192.168.100.9' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:31.350 192.168.100.9' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.350 18:09:38 
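The trace above folds every discovered address into RDMA_IP_LIST and peels off the first two as target IPs before probing nvme-rdma. Condensed, assuming the head/tail selection shown at nvmf/common.sh lines 484-491:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [ -z "$NVMF_FIRST_TARGET_IP" ] && { echo 'no RDMA IPs found'; exit 1; }
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'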
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:21:31.350 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:31.351 18:09:38 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:21:33.889 Waiting for block devices as requested 00:21:33.889 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:33.889 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:34.149 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:34.149 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:34.149 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:34.408 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:34.408 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:34.408 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:34.667 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:34.667 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:34.667 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:34.926 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:34.926 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:34.926 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:35.186 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:35.186 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:35.186 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:35.446 18:09:43 
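The steps traced next build the in-kernel nvmet target entirely through configfs: a subsystem directory, one namespace backed by the local NVMe drive, and an RDMA port on the first target IP. A condensed sketch; the attribute filenames are the standard nvmet ones, an assumption since the bare echo/mkdir trace below does not spell out its redirection targets:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
    echo rdma          > "$nvmet/ports/1/addr_trtype"
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    # Verify: the discovery controller should now report two log entries.
    nvme discover -t rdma -a 192.168.100.8 -s 4420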
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:35.446 No valid GPT data, bailing 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:35.446 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:21:35.706 00:21:35.706 Discovery Log Number of Records 2, Generation counter 2 00:21:35.706 =====Discovery Log Entry 0====== 00:21:35.706 trtype: rdma 00:21:35.706 adrfam: ipv4 00:21:35.706 subtype: current discovery subsystem 00:21:35.706 treq: not specified, sq 
flow control disable supported 00:21:35.706 portid: 1 00:21:35.706 trsvcid: 4420 00:21:35.706 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:35.706 traddr: 192.168.100.8 00:21:35.706 eflags: none 00:21:35.706 rdma_prtype: not specified 00:21:35.706 rdma_qptype: connected 00:21:35.706 rdma_cms: rdma-cm 00:21:35.706 rdma_pkey: 0x0000 00:21:35.706 =====Discovery Log Entry 1====== 00:21:35.706 trtype: rdma 00:21:35.706 adrfam: ipv4 00:21:35.706 subtype: nvme subsystem 00:21:35.706 treq: not specified, sq flow control disable supported 00:21:35.706 portid: 1 00:21:35.706 trsvcid: 4420 00:21:35.706 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:35.706 traddr: 192.168.100.8 00:21:35.706 eflags: none 00:21:35.706 rdma_prtype: not specified 00:21:35.706 rdma_qptype: connected 00:21:35.706 rdma_cms: rdma-cm 00:21:35.706 rdma_pkey: 0x0000 00:21:35.706 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:21:35.706 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:35.706 ===================================================== 00:21:35.706 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:35.706 ===================================================== 00:21:35.706 Controller Capabilities/Features 00:21:35.706 ================================ 00:21:35.706 Vendor ID: 0000 00:21:35.706 Subsystem Vendor ID: 0000 00:21:35.706 Serial Number: bc36da7bce91f23f5a86 00:21:35.706 Model Number: Linux 00:21:35.706 Firmware Version: 6.8.9-20 00:21:35.706 Recommended Arb Burst: 0 00:21:35.706 IEEE OUI Identifier: 00 00 00 00:21:35.706 Multi-path I/O 00:21:35.706 May have multiple subsystem ports: No 00:21:35.706 May have multiple controllers: No 00:21:35.706 Associated with SR-IOV VF: No 00:21:35.706 Max Data Transfer Size: Unlimited 00:21:35.706 Max Number of Namespaces: 0 00:21:35.706 Max Number of I/O Queues: 1024 00:21:35.706 NVMe Specification Version (VS): 1.3 00:21:35.706 NVMe Specification Version (Identify): 1.3 00:21:35.706 Maximum Queue Entries: 128 00:21:35.706 Contiguous Queues Required: No 00:21:35.706 Arbitration Mechanisms Supported 00:21:35.706 Weighted Round Robin: Not Supported 00:21:35.706 Vendor Specific: Not Supported 00:21:35.706 Reset Timeout: 7500 ms 00:21:35.706 Doorbell Stride: 4 bytes 00:21:35.706 NVM Subsystem Reset: Not Supported 00:21:35.706 Command Sets Supported 00:21:35.706 NVM Command Set: Supported 00:21:35.706 Boot Partition: Not Supported 00:21:35.706 Memory Page Size Minimum: 4096 bytes 00:21:35.706 Memory Page Size Maximum: 4096 bytes 00:21:35.706 Persistent Memory Region: Not Supported 00:21:35.706 Optional Asynchronous Events Supported 00:21:35.706 Namespace Attribute Notices: Not Supported 00:21:35.706 Firmware Activation Notices: Not Supported 00:21:35.706 ANA Change Notices: Not Supported 00:21:35.706 PLE Aggregate Log Change Notices: Not Supported 00:21:35.706 LBA Status Info Alert Notices: Not Supported 00:21:35.706 EGE Aggregate Log Change Notices: Not Supported 00:21:35.706 Normal NVM Subsystem Shutdown event: Not Supported 00:21:35.706 Zone Descriptor Change Notices: Not Supported 00:21:35.706 Discovery Log Change Notices: Supported 00:21:35.706 Controller Attributes 00:21:35.706 128-bit Host Identifier: Not Supported 00:21:35.706 Non-Operational Permissive Mode: Not Supported 00:21:35.706 NVM Sets: Not Supported 00:21:35.706 Read Recovery Levels: 
Not Supported 00:21:35.706 Endurance Groups: Not Supported 00:21:35.706 Predictable Latency Mode: Not Supported 00:21:35.706 Traffic Based Keep ALive: Not Supported 00:21:35.706 Namespace Granularity: Not Supported 00:21:35.706 SQ Associations: Not Supported 00:21:35.706 UUID List: Not Supported 00:21:35.706 Multi-Domain Subsystem: Not Supported 00:21:35.706 Fixed Capacity Management: Not Supported 00:21:35.706 Variable Capacity Management: Not Supported 00:21:35.706 Delete Endurance Group: Not Supported 00:21:35.706 Delete NVM Set: Not Supported 00:21:35.706 Extended LBA Formats Supported: Not Supported 00:21:35.706 Flexible Data Placement Supported: Not Supported 00:21:35.706 00:21:35.706 Controller Memory Buffer Support 00:21:35.706 ================================ 00:21:35.706 Supported: No 00:21:35.706 00:21:35.706 Persistent Memory Region Support 00:21:35.706 ================================ 00:21:35.706 Supported: No 00:21:35.706 00:21:35.706 Admin Command Set Attributes 00:21:35.706 ============================ 00:21:35.706 Security Send/Receive: Not Supported 00:21:35.706 Format NVM: Not Supported 00:21:35.706 Firmware Activate/Download: Not Supported 00:21:35.706 Namespace Management: Not Supported 00:21:35.706 Device Self-Test: Not Supported 00:21:35.706 Directives: Not Supported 00:21:35.706 NVMe-MI: Not Supported 00:21:35.706 Virtualization Management: Not Supported 00:21:35.706 Doorbell Buffer Config: Not Supported 00:21:35.706 Get LBA Status Capability: Not Supported 00:21:35.706 Command & Feature Lockdown Capability: Not Supported 00:21:35.706 Abort Command Limit: 1 00:21:35.706 Async Event Request Limit: 1 00:21:35.706 Number of Firmware Slots: N/A 00:21:35.706 Firmware Slot 1 Read-Only: N/A 00:21:35.706 Firmware Activation Without Reset: N/A 00:21:35.706 Multiple Update Detection Support: N/A 00:21:35.706 Firmware Update Granularity: No Information Provided 00:21:35.706 Per-Namespace SMART Log: No 00:21:35.706 Asymmetric Namespace Access Log Page: Not Supported 00:21:35.706 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:35.706 Command Effects Log Page: Not Supported 00:21:35.706 Get Log Page Extended Data: Supported 00:21:35.706 Telemetry Log Pages: Not Supported 00:21:35.706 Persistent Event Log Pages: Not Supported 00:21:35.706 Supported Log Pages Log Page: May Support 00:21:35.706 Commands Supported & Effects Log Page: Not Supported 00:21:35.706 Feature Identifiers & Effects Log Page:May Support 00:21:35.706 NVMe-MI Commands & Effects Log Page: May Support 00:21:35.706 Data Area 4 for Telemetry Log: Not Supported 00:21:35.706 Error Log Page Entries Supported: 1 00:21:35.706 Keep Alive: Not Supported 00:21:35.706 00:21:35.706 NVM Command Set Attributes 00:21:35.706 ========================== 00:21:35.706 Submission Queue Entry Size 00:21:35.706 Max: 1 00:21:35.706 Min: 1 00:21:35.706 Completion Queue Entry Size 00:21:35.706 Max: 1 00:21:35.706 Min: 1 00:21:35.706 Number of Namespaces: 0 00:21:35.706 Compare Command: Not Supported 00:21:35.706 Write Uncorrectable Command: Not Supported 00:21:35.706 Dataset Management Command: Not Supported 00:21:35.706 Write Zeroes Command: Not Supported 00:21:35.706 Set Features Save Field: Not Supported 00:21:35.706 Reservations: Not Supported 00:21:35.706 Timestamp: Not Supported 00:21:35.706 Copy: Not Supported 00:21:35.706 Volatile Write Cache: Not Present 00:21:35.706 Atomic Write Unit (Normal): 1 00:21:35.706 Atomic Write Unit (PFail): 1 00:21:35.706 Atomic Compare & Write Unit: 1 00:21:35.706 Fused Compare & Write: Not 
Supported 00:21:35.706 Scatter-Gather List 00:21:35.706 SGL Command Set: Supported 00:21:35.706 SGL Keyed: Supported 00:21:35.706 SGL Bit Bucket Descriptor: Not Supported 00:21:35.706 SGL Metadata Pointer: Not Supported 00:21:35.706 Oversized SGL: Not Supported 00:21:35.706 SGL Metadata Address: Not Supported 00:21:35.706 SGL Offset: Supported 00:21:35.706 Transport SGL Data Block: Not Supported 00:21:35.706 Replay Protected Memory Block: Not Supported 00:21:35.706 00:21:35.706 Firmware Slot Information 00:21:35.706 ========================= 00:21:35.706 Active slot: 0 00:21:35.706 00:21:35.706 00:21:35.706 Error Log 00:21:35.706 ========= 00:21:35.706 00:21:35.706 Active Namespaces 00:21:35.707 ================= 00:21:35.707 Discovery Log Page 00:21:35.707 ================== 00:21:35.707 Generation Counter: 2 00:21:35.707 Number of Records: 2 00:21:35.707 Record Format: 0 00:21:35.707 00:21:35.707 Discovery Log Entry 0 00:21:35.707 ---------------------- 00:21:35.707 Transport Type: 1 (RDMA) 00:21:35.707 Address Family: 1 (IPv4) 00:21:35.707 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:35.707 Entry Flags: 00:21:35.707 Duplicate Returned Information: 0 00:21:35.707 Explicit Persistent Connection Support for Discovery: 0 00:21:35.707 Transport Requirements: 00:21:35.707 Secure Channel: Not Specified 00:21:35.707 Port ID: 1 (0x0001) 00:21:35.707 Controller ID: 65535 (0xffff) 00:21:35.707 Admin Max SQ Size: 32 00:21:35.707 Transport Service Identifier: 4420 00:21:35.707 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:35.707 Transport Address: 192.168.100.8 00:21:35.707 Transport Specific Address Subtype - RDMA 00:21:35.707 RDMA QP Service Type: 1 (Reliable Connected) 00:21:35.707 RDMA Provider Type: 1 (No provider specified) 00:21:35.707 RDMA CM Service: 1 (RDMA_CM) 00:21:35.707 Discovery Log Entry 1 00:21:35.707 ---------------------- 00:21:35.707 Transport Type: 1 (RDMA) 00:21:35.707 Address Family: 1 (IPv4) 00:21:35.707 Subsystem Type: 2 (NVM Subsystem) 00:21:35.707 Entry Flags: 00:21:35.707 Duplicate Returned Information: 0 00:21:35.707 Explicit Persistent Connection Support for Discovery: 0 00:21:35.707 Transport Requirements: 00:21:35.707 Secure Channel: Not Specified 00:21:35.707 Port ID: 1 (0x0001) 00:21:35.707 Controller ID: 65535 (0xffff) 00:21:35.707 Admin Max SQ Size: 32 00:21:35.707 Transport Service Identifier: 4420 00:21:35.707 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:35.707 Transport Address: 192.168.100.8 00:21:35.707 Transport Specific Address Subtype - RDMA 00:21:35.707 RDMA QP Service Type: 1 (Reliable Connected) 00:21:35.967 RDMA Provider Type: 1 (No provider specified) 00:21:35.967 RDMA CM Service: 1 (RDMA_CM) 00:21:35.967 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:35.967 get_feature(0x01) failed 00:21:35.967 get_feature(0x02) failed 00:21:35.967 get_feature(0x04) failed 00:21:35.967 ===================================================== 00:21:35.967 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:21:35.967 ===================================================== 00:21:35.967 Controller Capabilities/Features 00:21:35.967 ================================ 00:21:35.967 Vendor ID: 0000 00:21:35.967 Subsystem Vendor ID: 0000 00:21:35.967 Serial Number: 
8f3a12053825a725be31 00:21:35.967 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:35.967 Firmware Version: 6.8.9-20 00:21:35.967 Recommended Arb Burst: 6 00:21:35.967 IEEE OUI Identifier: 00 00 00 00:21:35.967 Multi-path I/O 00:21:35.967 May have multiple subsystem ports: Yes 00:21:35.967 May have multiple controllers: Yes 00:21:35.967 Associated with SR-IOV VF: No 00:21:35.967 Max Data Transfer Size: 1048576 00:21:35.967 Max Number of Namespaces: 1024 00:21:35.967 Max Number of I/O Queues: 128 00:21:35.967 NVMe Specification Version (VS): 1.3 00:21:35.967 NVMe Specification Version (Identify): 1.3 00:21:35.967 Maximum Queue Entries: 128 00:21:35.967 Contiguous Queues Required: No 00:21:35.967 Arbitration Mechanisms Supported 00:21:35.967 Weighted Round Robin: Not Supported 00:21:35.967 Vendor Specific: Not Supported 00:21:35.967 Reset Timeout: 7500 ms 00:21:35.967 Doorbell Stride: 4 bytes 00:21:35.967 NVM Subsystem Reset: Not Supported 00:21:35.967 Command Sets Supported 00:21:35.967 NVM Command Set: Supported 00:21:35.967 Boot Partition: Not Supported 00:21:35.967 Memory Page Size Minimum: 4096 bytes 00:21:35.967 Memory Page Size Maximum: 4096 bytes 00:21:35.967 Persistent Memory Region: Not Supported 00:21:35.967 Optional Asynchronous Events Supported 00:21:35.967 Namespace Attribute Notices: Supported 00:21:35.967 Firmware Activation Notices: Not Supported 00:21:35.967 ANA Change Notices: Supported 00:21:35.967 PLE Aggregate Log Change Notices: Not Supported 00:21:35.967 LBA Status Info Alert Notices: Not Supported 00:21:35.967 EGE Aggregate Log Change Notices: Not Supported 00:21:35.967 Normal NVM Subsystem Shutdown event: Not Supported 00:21:35.967 Zone Descriptor Change Notices: Not Supported 00:21:35.967 Discovery Log Change Notices: Not Supported 00:21:35.967 Controller Attributes 00:21:35.967 128-bit Host Identifier: Supported 00:21:35.967 Non-Operational Permissive Mode: Not Supported 00:21:35.967 NVM Sets: Not Supported 00:21:35.967 Read Recovery Levels: Not Supported 00:21:35.967 Endurance Groups: Not Supported 00:21:35.967 Predictable Latency Mode: Not Supported 00:21:35.967 Traffic Based Keep ALive: Supported 00:21:35.967 Namespace Granularity: Not Supported 00:21:35.967 SQ Associations: Not Supported 00:21:35.967 UUID List: Not Supported 00:21:35.967 Multi-Domain Subsystem: Not Supported 00:21:35.967 Fixed Capacity Management: Not Supported 00:21:35.967 Variable Capacity Management: Not Supported 00:21:35.967 Delete Endurance Group: Not Supported 00:21:35.967 Delete NVM Set: Not Supported 00:21:35.967 Extended LBA Formats Supported: Not Supported 00:21:35.967 Flexible Data Placement Supported: Not Supported 00:21:35.967 00:21:35.967 Controller Memory Buffer Support 00:21:35.967 ================================ 00:21:35.967 Supported: No 00:21:35.967 00:21:35.967 Persistent Memory Region Support 00:21:35.967 ================================ 00:21:35.967 Supported: No 00:21:35.967 00:21:35.967 Admin Command Set Attributes 00:21:35.967 ============================ 00:21:35.967 Security Send/Receive: Not Supported 00:21:35.967 Format NVM: Not Supported 00:21:35.967 Firmware Activate/Download: Not Supported 00:21:35.967 Namespace Management: Not Supported 00:21:35.967 Device Self-Test: Not Supported 00:21:35.967 Directives: Not Supported 00:21:35.967 NVMe-MI: Not Supported 00:21:35.967 Virtualization Management: Not Supported 00:21:35.967 Doorbell Buffer Config: Not Supported 00:21:35.967 Get LBA Status Capability: Not Supported 00:21:35.967 Command & Feature Lockdown 
Capability: Not Supported 00:21:35.967 Abort Command Limit: 4 00:21:35.967 Async Event Request Limit: 4 00:21:35.967 Number of Firmware Slots: N/A 00:21:35.967 Firmware Slot 1 Read-Only: N/A 00:21:35.967 Firmware Activation Without Reset: N/A 00:21:35.967 Multiple Update Detection Support: N/A 00:21:35.967 Firmware Update Granularity: No Information Provided 00:21:35.967 Per-Namespace SMART Log: Yes 00:21:35.967 Asymmetric Namespace Access Log Page: Supported 00:21:35.968 ANA Transition Time : 10 sec 00:21:35.968 00:21:35.968 Asymmetric Namespace Access Capabilities 00:21:35.968 ANA Optimized State : Supported 00:21:35.968 ANA Non-Optimized State : Supported 00:21:35.968 ANA Inaccessible State : Supported 00:21:35.968 ANA Persistent Loss State : Supported 00:21:35.968 ANA Change State : Supported 00:21:35.968 ANAGRPID is not changed : No 00:21:35.968 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:35.968 00:21:35.968 ANA Group Identifier Maximum : 128 00:21:35.968 Number of ANA Group Identifiers : 128 00:21:35.968 Max Number of Allowed Namespaces : 1024 00:21:35.968 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:35.968 Command Effects Log Page: Supported 00:21:35.968 Get Log Page Extended Data: Supported 00:21:35.968 Telemetry Log Pages: Not Supported 00:21:35.968 Persistent Event Log Pages: Not Supported 00:21:35.968 Supported Log Pages Log Page: May Support 00:21:35.968 Commands Supported & Effects Log Page: Not Supported 00:21:35.968 Feature Identifiers & Effects Log Page:May Support 00:21:35.968 NVMe-MI Commands & Effects Log Page: May Support 00:21:35.968 Data Area 4 for Telemetry Log: Not Supported 00:21:35.968 Error Log Page Entries Supported: 128 00:21:35.968 Keep Alive: Supported 00:21:35.968 Keep Alive Granularity: 1000 ms 00:21:35.968 00:21:35.968 NVM Command Set Attributes 00:21:35.968 ========================== 00:21:35.968 Submission Queue Entry Size 00:21:35.968 Max: 64 00:21:35.968 Min: 64 00:21:35.968 Completion Queue Entry Size 00:21:35.968 Max: 16 00:21:35.968 Min: 16 00:21:35.968 Number of Namespaces: 1024 00:21:35.968 Compare Command: Not Supported 00:21:35.968 Write Uncorrectable Command: Not Supported 00:21:35.968 Dataset Management Command: Supported 00:21:35.968 Write Zeroes Command: Supported 00:21:35.968 Set Features Save Field: Not Supported 00:21:35.968 Reservations: Not Supported 00:21:35.968 Timestamp: Not Supported 00:21:35.968 Copy: Not Supported 00:21:35.968 Volatile Write Cache: Present 00:21:35.968 Atomic Write Unit (Normal): 1 00:21:35.968 Atomic Write Unit (PFail): 1 00:21:35.968 Atomic Compare & Write Unit: 1 00:21:35.968 Fused Compare & Write: Not Supported 00:21:35.968 Scatter-Gather List 00:21:35.968 SGL Command Set: Supported 00:21:35.968 SGL Keyed: Supported 00:21:35.968 SGL Bit Bucket Descriptor: Not Supported 00:21:35.968 SGL Metadata Pointer: Not Supported 00:21:35.968 Oversized SGL: Not Supported 00:21:35.968 SGL Metadata Address: Not Supported 00:21:35.968 SGL Offset: Supported 00:21:35.968 Transport SGL Data Block: Not Supported 00:21:35.968 Replay Protected Memory Block: Not Supported 00:21:35.968 00:21:35.968 Firmware Slot Information 00:21:35.968 ========================= 00:21:35.968 Active slot: 0 00:21:35.968 00:21:35.968 Asymmetric Namespace Access 00:21:35.968 =========================== 00:21:35.968 Change Count : 0 00:21:35.968 Number of ANA Group Descriptors : 1 00:21:35.968 ANA Group Descriptor : 0 00:21:35.968 ANA Group ID : 1 00:21:35.968 Number of NSID Values : 1 00:21:35.968 Change Count : 0 00:21:35.968 ANA State 
: 1 00:21:35.968 Namespace Identifier : 1 00:21:35.968 00:21:35.968 Commands Supported and Effects 00:21:35.968 ============================== 00:21:35.968 Admin Commands 00:21:35.968 -------------- 00:21:35.968 Get Log Page (02h): Supported 00:21:35.968 Identify (06h): Supported 00:21:35.968 Abort (08h): Supported 00:21:35.968 Set Features (09h): Supported 00:21:35.968 Get Features (0Ah): Supported 00:21:35.968 Asynchronous Event Request (0Ch): Supported 00:21:35.968 Keep Alive (18h): Supported 00:21:35.968 I/O Commands 00:21:35.968 ------------ 00:21:35.968 Flush (00h): Supported 00:21:35.968 Write (01h): Supported LBA-Change 00:21:35.968 Read (02h): Supported 00:21:35.968 Write Zeroes (08h): Supported LBA-Change 00:21:35.968 Dataset Management (09h): Supported 00:21:35.968 00:21:35.968 Error Log 00:21:35.968 ========= 00:21:35.968 Entry: 0 00:21:35.968 Error Count: 0x3 00:21:35.968 Submission Queue Id: 0x0 00:21:35.968 Command Id: 0x5 00:21:35.968 Phase Bit: 0 00:21:35.968 Status Code: 0x2 00:21:35.968 Status Code Type: 0x0 00:21:35.968 Do Not Retry: 1 00:21:35.968 Error Location: 0x28 00:21:35.968 LBA: 0x0 00:21:35.968 Namespace: 0x0 00:21:35.968 Vendor Log Page: 0x0 00:21:35.968 ----------- 00:21:35.968 Entry: 1 00:21:35.968 Error Count: 0x2 00:21:35.968 Submission Queue Id: 0x0 00:21:35.968 Command Id: 0x5 00:21:35.968 Phase Bit: 0 00:21:35.968 Status Code: 0x2 00:21:35.968 Status Code Type: 0x0 00:21:35.968 Do Not Retry: 1 00:21:35.968 Error Location: 0x28 00:21:35.968 LBA: 0x0 00:21:35.968 Namespace: 0x0 00:21:35.968 Vendor Log Page: 0x0 00:21:35.968 ----------- 00:21:35.968 Entry: 2 00:21:35.968 Error Count: 0x1 00:21:35.968 Submission Queue Id: 0x0 00:21:35.968 Command Id: 0x0 00:21:35.968 Phase Bit: 0 00:21:35.968 Status Code: 0x2 00:21:35.968 Status Code Type: 0x0 00:21:35.968 Do Not Retry: 1 00:21:35.968 Error Location: 0x28 00:21:35.968 LBA: 0x0 00:21:35.968 Namespace: 0x0 00:21:35.968 Vendor Log Page: 0x0 00:21:35.968 00:21:35.968 Number of Queues 00:21:35.968 ================ 00:21:35.968 Number of I/O Submission Queues: 128 00:21:35.968 Number of I/O Completion Queues: 128 00:21:35.968 00:21:35.968 ZNS Specific Controller Data 00:21:35.968 ============================ 00:21:35.968 Zone Append Size Limit: 0 00:21:35.968 00:21:35.968 00:21:35.968 Active Namespaces 00:21:35.968 ================= 00:21:35.968 get_feature(0x05) failed 00:21:35.968 Namespace ID:1 00:21:35.968 Command Set Identifier: NVM (00h) 00:21:35.968 Deallocate: Supported 00:21:35.968 Deallocated/Unwritten Error: Not Supported 00:21:35.968 Deallocated Read Value: Unknown 00:21:35.968 Deallocate in Write Zeroes: Not Supported 00:21:35.968 Deallocated Guard Field: 0xFFFF 00:21:35.968 Flush: Supported 00:21:35.968 Reservation: Not Supported 00:21:35.968 Namespace Sharing Capabilities: Multiple Controllers 00:21:35.968 Size (in LBAs): 3907029168 (1863GiB) 00:21:35.968 Capacity (in LBAs): 3907029168 (1863GiB) 00:21:35.968 Utilization (in LBAs): 3907029168 (1863GiB) 00:21:35.968 UUID: a41079c9-8579-4506-8bc4-6546243a0d9d 00:21:35.968 Thin Provisioning: Not Supported 00:21:35.968 Per-NS Atomic Units: Yes 00:21:35.968 Atomic Boundary Size (Normal): 0 00:21:35.968 Atomic Boundary Size (PFail): 0 00:21:35.968 Atomic Boundary Offset: 0 00:21:35.968 NGUID/EUI64 Never Reused: No 00:21:35.968 ANA group ID: 1 00:21:35.968 Namespace Write Protected: No 00:21:35.968 Number of LBA Formats: 1 00:21:35.968 Current LBA Format: LBA Format #00 00:21:35.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:35.968 00:21:35.968 
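The identify output above reports a single namespace of 3907029168 LBAs in LBA Format #00 (512-byte data size, no metadata). As a quick sanity check of the "1863GiB" figures, shell arithmetic reproduces them; this is a standalone snippet, not part of the test scripts:

  # 3907029168 LBAs x 512 B per LBA, expressed in GiB (integer division)
  lbas=3907029168
  lba_size=512
  echo "$(( lbas * lba_size / 1024**3 )) GiB"   # -> "1863 GiB", matching the log

The get_feature(0x01/0x02/0x04/0x05) failures are also consistent with the error log above: all three entries carry Status Code 0x2 (Invalid Field in Command), i.e. the kernel target simply rejects those optional Get Features requests.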
18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:21:35.968 rmmod nvme_rdma
00:21:35.968 rmmod nvme_fabrics
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:21:35.968 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:21:36.228 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:21:36.228 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:21:36.228 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet
00:21:36.228 18:09:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:21:39.513 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
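The clean_kernel_target trace above tears down the kernel NVMe-oF target through configfs. For reference, the same steps as a standalone sketch; the paths are copied from the trace, but the target of the bare "echo 0" is not visible in the log and is assumed to be the namespace's enable attribute:

  # Tear down the kernel nvmet target created for this test (sketch).
  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of the "echo 0" above
  rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
  rmdir "$cfg/subsystems/$nqn/namespaces/1"             # remove the namespace
  rmdir "$cfg/ports/1"                                  # remove the RDMA port
  rmdir "$cfg/subsystems/$nqn"                          # remove the subsystem itself
  modprobe -r nvmet_rdma nvmet                          # unload the target modules

The order matters: configfs refuses to rmdir a subsystem that is still linked into a port or that still holds namespaces, which is why the port link is removed first.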
00:21:39.513 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:21:39.513 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:21:39.772 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:21:39.772 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:21:39.772 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:21:39.772 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:21:39.772 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:21:39.772 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:21:41.677 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:21:41.677
00:21:41.677 real 0m18.791s
00:21:41.677 user 0m5.128s
00:21:41.677 sys 0m10.972s
00:21:41.677 18:09:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:41.677 18:09:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.677 ************************************
00:21:41.677 END TEST nvmf_identify_kernel_target
00:21:41.677 ************************************
00:21:41.936 18:09:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:21:41.936 18:09:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:41.936 18:09:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:41.936 18:09:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:41.936 ************************************
00:21:41.936 START TEST nvmf_auth_host
00:21:41.936 ************************************
00:21:41.936 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:21:41.936 * Looking for test storage...
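Before the next test starts, setup.sh rebinds the ioatdma channels and the local NVMe disk (0000:d8:00.0) back to vfio-pci so SPDK's userspace drivers can claim them. A generic sketch of one such rebind via sysfs; SPDK's actual setup.sh handles many more cases (hugepages, multiple drivers, permissions), this is only the core mechanism:

  modprobe vfio-pci                                            # make sure the target driver is loaded
  bdf=0000:d8:00.0                                             # the NVMe device rebound in the log above
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach the current (nvme) driver
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next probe to vfio-pci
  echo "$bdf" > /sys/bus/pci/drivers_probe                     # re-probe; vfio-pci now claims the device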
00:21:41.936 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:41.936 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.937 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.196 --rc genhtml_branch_coverage=1 00:21:42.196 --rc genhtml_function_coverage=1 00:21:42.196 --rc genhtml_legend=1 00:21:42.196 --rc geninfo_all_blocks=1 00:21:42.196 --rc geninfo_unexecuted_blocks=1 00:21:42.196 00:21:42.196 ' 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.196 --rc genhtml_branch_coverage=1 00:21:42.196 --rc genhtml_function_coverage=1 00:21:42.196 --rc genhtml_legend=1 00:21:42.196 --rc geninfo_all_blocks=1 00:21:42.196 --rc geninfo_unexecuted_blocks=1 00:21:42.196 00:21:42.196 ' 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.196 --rc genhtml_branch_coverage=1 00:21:42.196 --rc genhtml_function_coverage=1 00:21:42.196 --rc genhtml_legend=1 00:21:42.196 --rc geninfo_all_blocks=1 00:21:42.196 --rc geninfo_unexecuted_blocks=1 00:21:42.196 00:21:42.196 ' 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:42.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.196 --rc genhtml_branch_coverage=1 00:21:42.196 --rc genhtml_function_coverage=1 00:21:42.196 --rc genhtml_legend=1 00:21:42.196 --rc geninfo_all_blocks=1 00:21:42.196 --rc geninfo_unexecuted_blocks=1 00:21:42.196 00:21:42.196 ' 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.196 18:09:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:42.196 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.197 18:09:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.318 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:50.319 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:50.319 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:50.319 18:09:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:50.319 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:50.319 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:50.319 18:09:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:50.319 18:09:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
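The get_ip_address calls traced above resolve each RDMA interface to its IPv4 address. Extracted as a self-contained function, using the same pipeline shown at nvmf/common.sh@116-117 in the trace:

  get_ip_address() {
      # "ip -o" prints one address per line; field 4 is the CIDR (e.g. 192.168.100.8/24),
      # and cut strips the prefix length.
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed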
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:21:50.319 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:50.319 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:21:50.319 altname enp217s0f0np0
00:21:50.319 altname ens818f0np0
00:21:50.319 inet 192.168.100.8/24 scope global mlx_0_0
00:21:50.319 valid_lft forever preferred_lft forever
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:21:50.319 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:21:50.319 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:21:50.319 altname enp217s0f1np1
00:21:50.319 altname ens818f1np1
00:21:50.319 inet 192.168.100.9/24 scope global mlx_0_1
00:21:50.319 valid_lft forever preferred_lft forever
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:21:50.319 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo
mlx_0_0 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:50.320 192.168.100.9' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:50.320 192.168.100.9' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:50.320 192.168.100.9' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
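Here common.sh flattens the per-interface addresses into RDMA_IP_LIST and then peels off the first and second target IPs with head/tail, exactly as traced at nvmf/common.sh@485-486. In isolation:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                            # values gathered above
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9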
00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2436496 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2436496 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2436496 ']' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.320 18:09:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=51c1916b070fe26308eba70e347afddc 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PB9 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 51c1916b070fe26308eba70e347afddc 0 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 51c1916b070fe26308eba70e347afddc 0 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=51c1916b070fe26308eba70e347afddc 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PB9 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PB9 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PB9 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=06443c4f1c5f81bc4951193dcee33b53087316d01caea764308f0f71f5ee5095 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1Jl 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 06443c4f1c5f81bc4951193dcee33b53087316d01caea764308f0f71f5ee5095 3 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 06443c4f1c5f81bc4951193dcee33b53087316d01caea764308f0f71f5ee5095 3 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=06443c4f1c5f81bc4951193dcee33b53087316d01caea764308f0f71f5ee5095 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:50.320 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1Jl 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1Jl 00:21:50.580 18:09:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1Jl 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=98f0e3f4ad96a4318115c5045793c7a9f5497ee9d60eba60 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.knt 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 98f0e3f4ad96a4318115c5045793c7a9f5497ee9d60eba60 0 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 98f0e3f4ad96a4318115c5045793c7a9f5497ee9d60eba60 0 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=98f0e3f4ad96a4318115c5045793c7a9f5497ee9d60eba60 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.knt 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.knt 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.knt 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4b2e31fb432c81e434fb2b13203019fa8a1429065153b698 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6rJ 00:21:50.580 
18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4b2e31fb432c81e434fb2b13203019fa8a1429065153b698 2 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4b2e31fb432c81e434fb2b13203019fa8a1429065153b698 2 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4b2e31fb432c81e434fb2b13203019fa8a1429065153b698 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6rJ 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6rJ 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6rJ 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d802a1dc84f5235e787be6ce084d5ff7 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GE1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d802a1dc84f5235e787be6ce084d5ff7 1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d802a1dc84f5235e787be6ce084d5ff7 1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d802a1dc84f5235e787be6ce084d5ff7 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GE1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GE1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GE1 00:21:50.580 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:50.580 18:09:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d79c75e1838bb64e22fdc772f20217ca 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JS2 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d79c75e1838bb64e22fdc772f20217ca 1 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d79c75e1838bb64e22fdc772f20217ca 1 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d79c75e1838bb64e22fdc772f20217ca 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JS2 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JS2 00:21:50.581 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.JS2 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=898ef56c8d722216d5abe2f638e466da41f162ff00376a87 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pcj 00:21:50.840 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 898ef56c8d722216d5abe2f638e466da41f162ff00376a87 2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
898ef56c8d722216d5abe2f638e466da41f162ff00376a87 2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=898ef56c8d722216d5abe2f638e466da41f162ff00376a87 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pcj 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pcj 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.pcj 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f6ce155b85331676ec55544ebba8a62 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4QA 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f6ce155b85331676ec55544ebba8a62 0 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f6ce155b85331676ec55544ebba8a62 0 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f6ce155b85331676ec55544ebba8a62 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4QA 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4QA 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4QA 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:50.841 
18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=785ae034295db98d8acab869548fb5380b3f2824e9fc4f810a9c950eb7c23359 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vn2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 785ae034295db98d8acab869548fb5380b3f2824e9fc4f810a9c950eb7c23359 3 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 785ae034295db98d8acab869548fb5380b3f2824e9fc4f810a9c950eb7c23359 3 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=785ae034295db98d8acab869548fb5380b3f2824e9fc4f810a9c950eb7c23359 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vn2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vn2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vn2 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2436496 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2436496 ']' 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
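[editor's note] Every gen_dhchap_key trace above (host/auth.sh@73-77) follows one pattern: pull len/2 random bytes from /dev/urandom as a hex string, then wrap that string in the DHHC-1 secret representation via an inline "python -" step (nvmf/common.sh@733). A sketch of what that wrapping amounts to, assuming the NVMe in-band-auth encoding base64(secret || CRC32(secret), little-endian) — an assumption, but one consistent with the keys printed later in this log, e.g. DHHC-1:00:OThmMGUz...YTYwkvRpMA==: decodes back to the 98f0e3f4... hex string plus a 4-byte CRC:

  # Reconstruction of gen_dhchap_key/format_key as traced above, not the verbatim script.
  gen_dhchap_key() {
      local digest=$1 len=$2
      local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex string of $len chars
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_key DHHC-1 "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"                                       # captured into keys[]/ckeys[]
  }

  format_key() {
      local prefix=$1 key=$2 digest=$3
      # The trace's "python -" step: append CRC32 (little-endian), base64-encode,
      # and emit "<prefix>:<digest as 2 hex digits>:<b64>:".
      python3 - "$prefix" "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
  crc = zlib.crc32(key).to_bytes(4, byteorder="little")
  print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
  EOF
  }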
00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.841 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PB9 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1Jl ]] 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Jl 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.knt 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6rJ ]] 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6rJ 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GE1 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.100 18:09:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.JS2 ]] 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JS2 00:21:51.100 18:09:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.pcj 00:21:51.100 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4QA ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4QA 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vn2 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:21:51.101 18:09:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:51.101 18:09:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:21:54.384 Waiting for block devices as requested 00:21:54.643 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:54.643 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:54.643 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:54.643 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:54.901 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:54.901 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:54.901 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:55.160 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:55.160 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:55.160 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:55.418 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:55.418 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:55.418 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:55.677 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:55.677 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:55.677 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:55.936 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:21:56.503 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:56.504 No valid GPT data, bailing 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:56.504 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:21:56.763 00:21:56.763 Discovery Log Number of Records 2, Generation counter 2 00:21:56.763 =====Discovery Log Entry 0====== 00:21:56.763 trtype: rdma 00:21:56.763 adrfam: ipv4 00:21:56.763 subtype: current discovery subsystem 00:21:56.763 treq: not specified, sq flow control disable supported 00:21:56.763 portid: 1 00:21:56.763 trsvcid: 4420 00:21:56.763 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:56.763 traddr: 192.168.100.8 00:21:56.763 eflags: none 00:21:56.763 rdma_prtype: not specified 00:21:56.763 rdma_qptype: connected 00:21:56.763 rdma_cms: rdma-cm 00:21:56.763 rdma_pkey: 0x0000 00:21:56.763 =====Discovery Log Entry 1====== 00:21:56.763 trtype: rdma 00:21:56.763 adrfam: ipv4 00:21:56.763 subtype: nvme subsystem 00:21:56.763 treq: not specified, sq flow control disable supported 00:21:56.763 portid: 1 00:21:56.763 trsvcid: 4420 00:21:56.763 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:56.763 traddr: 192.168.100.8 00:21:56.763 eflags: none 00:21:56.763 rdma_prtype: not specified 00:21:56.763 rdma_qptype: connected 00:21:56.763 rdma_cms: rdma-cm 00:21:56.763 rdma_pkey: 0x0000 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.763 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 nvme0n1 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:57.067 18:10:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:57.067 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:57.067 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:57.067 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:57.067 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.067 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.067 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.369 nvme0n1 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
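[editor's note] The nvmet_auth_set_key traces above (host/auth.sh@42-51) configure the kernel-target side of DHCHAP: the echoed 'hmac(sha256)', dhgroup, and DHHC-1 secrets are written into the nvmet configfs host entry created earlier for nqn.2024-02.io.spdk:host0; the initiator side then mirrors them in the bdev_nvme_attach_controller --dhchap-key/--dhchap-ctrlr-key calls. xtrace does not print redirection targets, so the configfs attribute names below are assumed from the kernel nvmet auth interface rather than read from the log; the echoed values are exactly what the trace shows:

  # Sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51 (attribute names assumed).
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      local key=${keys[keyid]} ckey=${ckeys[keyid]}

      echo "hmac($digest)" > "$host/dhchap_hash"      # host/auth.sh@48: e.g. hmac(sha256)
      echo "$dhgroup"      > "$host/dhchap_dhgroup"   # host/auth.sh@49: e.g. ffdhe2048
      echo "$key"          > "$host/dhchap_key"       # host/auth.sh@50: DHHC-1:..: secret
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrlr_key"  # @51: only for bidirectional auth
  }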
00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.369 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.628 nvme0n1 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.628 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.887 nvme0n1 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.887 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.888 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.147 18:10:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.147 nvme0n1 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.147 18:10:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.147 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.406 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.407 nvme0n1 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.407 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 
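(The repeating pattern traced above is the test's DH-HMAC-CHAP sweep: for each DH group it walks every key index, programs the key on the target, pins the host to a single digest/DH-group pair, attaches over RDMA, confirms the controller came up, and detaches. Condensed into a sketch — paraphrased from the host/auth.sh xtrace itself, not the verbatim script; rpc_cmd and nvmet_auth_set_key are the traced helper functions, and the keys/ckeys arrays hold the DHHC-1 secrets echoed in the log:

  # One sha256 pass of the sweep, as visible in this stretch of the trace.
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"       # target side
          rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 \
              --dhchap-dhgroups "$dhgroup"                    # host side
          # keyid 4 has no controller key in this run, so ckey expands
          # to nothing there (the "[[ -z '' ]]" branches in the trace)
          ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
              -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
              -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" "${ckey[@]}"
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
      done
  done

The interleaved get_main_ns_ip trace is the nvmf/common.sh helper picking which address variable applies per transport — rdma maps to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP — which is where the repeated "echo 192.168.100.8" lines come from.)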
00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.666 18:10:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.666 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.926 nvme0n1 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:58.926 18:10:06 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.926 18:10:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.186 nvme0n1 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
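(A note on the DHHC-1:xx:...: strings echoed throughout: these are NVMe in-band authentication secrets in the format produced by nvme-cli's gen-dhchap-key. As best understood — stated from memory of the spec and nvme-cli, so treat the field meanings as an assumption — the middle field selects the HMAC used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload is the raw secret followed by its 4-byte CRC32. A hypothetical helper, not part of the test scripts, to split one apart:

  # decode_dhchap_key is illustrative only; it just splits the fields
  # and reports the decoded payload length (secret bytes + 4 CRC bytes).
  decode_dhchap_key() {
      local hmac payload
      IFS=: read -r _ hmac payload _ <<<"$1"   # DHHC-1:<hmac>:<base64>:
      printf 'hmac id: %s, secret+crc bytes: %s\n' \
          "$hmac" "$(base64 -d <<<"$payload" | wc -c)"
  }
  decode_dhchap_key "DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf:"

Run on the keyid-0 secret from this log, that reports 36 payload bytes, i.e. a 32-byte secret plus CRC, consistent with the key lengths the test uses.)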
00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.186 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.446 nvme0n1 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.446 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.705 nvme0n1 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.705 18:10:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.705 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.965 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.225 nvme0n1 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.225 18:10:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.225 
18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.225 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.485 nvme0n1 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.485 
18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.485 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.053 nvme0n1 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.053 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:01.054 
18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.054 18:10:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.313 nvme0n1 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.313 18:10:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:01.313 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:01.314 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:01.314 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.314 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.572 nvme0n1 00:22:01.572 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.572 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:01.572 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:01.572 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.572 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.572 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.831 
18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.831 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 nvme0n1 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.090 18:10:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.659 nvme0n1 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.659 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 nvme0n1 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.228 18:10:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.228 18:10:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.228 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 nvme0n1 00:22:03.487 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.487 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:03.487 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:03.487 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.487 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.487 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:03.746 18:10:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.746 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.005 nvme0n1 00:22:04.005 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.005 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.005 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.005 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.005 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.005 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.264 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.264 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.264 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.264 18:10:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:04.264 18:10:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.264 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.523 nvme0n1 00:22:04.523 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.523 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.523 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.523 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.523 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.523 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 
00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.782 18:10:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.350 nvme0n1 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.350 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.918 nvme0n1 00:22:05.918 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.918 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.918 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.918 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.918 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.918 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:06.177 18:10:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.177 18:10:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.745 nvme0n1 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.745 
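(Annotation: every (digest, dhgroup, keyid) combination in this trace exercises the same host-side cycle through the four RPCs shown: pin the initiator to a single digest/DH-group pair, attach with the key under test, check that the controller actually registered, then detach before the next combination. A condensed sketch of one such cycle using the values from the surrounding entries; "rpc_cmd" is the test suite's wrapper around SPDK's scripts/rpc.py, and key2/ckey2 are key names set up earlier in the run, outside this excerpt:

    # One connect/verify/detach cycle (sha256 + ffdhe8192, key id 2).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2   # ctrlr key passed only when a ckey exists for this id

    # Authentication succeeded iff the controller shows up under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    rpc_cmd bdev_nvme_detach_controller nvme0        # clean slate for the next combination

The DHHC-1 strings passed around here follow the NVMe in-band authentication (TP 8006) secret representation "DHHC-1:NN:<base64>:", where NN selects an optional transform of the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a CRC-32 guard.)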
18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.745 18:10:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.313 nvme0n1 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.313 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:07.571 18:10:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.571 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.139 nvme0n1 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.139 18:10:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:08.139 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.140 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.399 nvme0n1 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.399 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.658 nvme0n1 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.658 18:10:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.658 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.659 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.917 nvme0n1 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.917 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.176 18:10:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.176 18:10:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.176 nvme0n1 00:22:09.176 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.176 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.176 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.176 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.176 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.176 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:09.435 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:22:09.436 nvme0n1 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.436 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:09.695 
18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.695 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.955 nvme0n1 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.955 18:10:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.214 nvme0n1 00:22:10.214 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.214 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.214 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.214 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.214 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.215 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.474 nvme0n1 00:22:10.474 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.474 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.474 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.474 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.474 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.474 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.475 18:10:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.475 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.733 nvme0n1 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.733 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:10.992 18:10:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.992 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.252 nvme0n1 00:22:11.252 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.252 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.252 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:22:11.252 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.252 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.252 18:10:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:11.252 18:10:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.252 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.511 nvme0n1 00:22:11.511 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.511 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.511 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.511 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.512 18:10:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:11.512 18:10:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.512 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 nvme0n1 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.081 18:10:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.340 nvme0n1 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.340 18:10:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.340 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.599 nvme0n1 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.599 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.858 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.117 nvme0n1 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.117 18:10:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:13.117 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.118 18:10:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.685 nvme0n1 00:22:13.685 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.685 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
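The @42-@44 assignments just traced are the head of nvmet_auth_set_key, and the @45-@51 echoes that follow push the key material into the in-kernel target before the host reconnects. A minimal sketch of that helper, assuming the usual Linux nvmet configfs attributes (the attribute paths are an assumption; this log only shows the echoes):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3 key ckey
        key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        # hypothetical host entry under the nvmet configfs tree
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"    # kernel crypto name, e.g. 'hmac(sha384)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup" # ffdhe2048 .. ffdhe8192
        echo "$key"          > "$host/dhchap_key"     # DHHC-1 host secret
        # a controller key is written only for bidirectional key ids (@51)
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }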
00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.686 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.945 nvme0n1 00:22:13.945 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:14.204 18:10:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.204 18:10:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.204 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.204 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.204 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.205 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.464 nvme0n1 00:22:14.464 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.464 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.464 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.464 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
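On both sides of this point the @55-@65 markers trace connect_authenticate for sha384/ffdhe6144/key 2: set the initiator's DH-HMAC-CHAP options, attach over RDMA, confirm the controller registered, then detach. Reconstructed from those markers as a sketch (rpc_cmd wraps SPDK's rpc.py in this harness; the exact function body is an assumption):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # @58: pass a controller key only when this key id has one
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                                            # @61
        # @64: authentication succeeded iff the controller shows up under its name
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0                                              # @65
    }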
00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.723 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.982 nvme0n1 00:22:14.982 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.982 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.982 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.982 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.982 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.241 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.241 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.241 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.241 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.241 18:10:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:15.241 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.242 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.501 nvme0n1 00:22:15.501 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.501 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.501 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.501 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.501 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.501 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
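That detach closes out the ffdhe6144 sweep; the @101-@104 markers that follow immediately restart the same sequence for ffdhe8192, key id 0. The driving loop, as the markers themselves show it (loop bodies reconstructed; the enclosing sha384 digest loop is inferred from the prefix used throughout this stretch):

    for dhgroup in "${dhgroups[@]}"; do        # @101: ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do         # @102: key ids 0-4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: program the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
        done
    done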
00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.760 18:10:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.328 nvme0n1 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:16.328 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.329 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.897 nvme0n1 00:22:16.897 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.897 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.897 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.897 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.897 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.897 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.160 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.161 18:10:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.727 nvme0n1 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:17.727 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:17.728 
18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.728 18:10:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.296 nvme0n1 00:22:18.296 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.296 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.296 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.296 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.296 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.296 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.555 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.123 nvme0n1 00:22:19.123 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.123 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.123 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.123 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.123 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.123 18:10:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:19.123 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:19.124 18:10:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.124 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.383 nvme0n1 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.383 18:10:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.383 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.643 nvme0n1 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.643 18:10:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:19.643 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
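The trace repeats the same round trip for every (digest, dhgroup, keyid) combination: restrict the host to the one digest/dhgroup pair under test with bdev_nvme_set_options, attach over RDMA with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists for that keyid), confirm the controller came up, then detach. A minimal sketch of that round trip, reconstructed only from the commands visible above (rpc_cmd, get_main_ns_ip and the keys/ckeys arrays are the names the log itself shows; this is illustrative, not the verbatim host/auth.sh source):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow exactly one digest/dhgroup pair on the host side (auth.sh@60).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # The controller (bidirectional) key is optional: pass it only when
        # ckeys[keyid] is non-empty, exactly as auth.sh@58 does.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The controller only exists if DH-HMAC-CHAP completed, so this
        # comparison (auth.sh@64) is the pass/fail check for the iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }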
00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.902 nvme0n1 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.902 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.162 18:10:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.162 18:10:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.162 nvme0n1 00:22:20.162 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.162 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.162 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.162 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.162 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:20.421 18:10:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.421 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.680 nvme0n1 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
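Every attach is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@769-783): an associative array maps each transport to the name of the environment variable holding the target address, and that name is then dereferenced, which is how the literal rdma resolves to 192.168.100.8 in this run. A sketch under that reading; the TEST_TRANSPORT variable name is an assumption, since the xtrace only shows the already-expanded value rdma:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Pick the variable *name* for this transport (common.sh@775-776),
        # then expand it indirectly to get the actual address.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_FIRST_TARGET_IP
        ip=${!ip}                              # -> 192.168.100.8 in this run
        [[ -z $ip ]] && return 1               # common.sh@778
        echo "$ip"                             # common.sh@783
    }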
00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.680 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.681 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.940 nvme0n1 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.940 18:10:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.199 nvme0n1 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.199 18:10:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.199 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.200 18:10:29 
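
Each attach is then verified and torn down before the next key id is tried; the bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller triple traced above reduces to (same rpc.py assumption as in the previous sketch):

  # The connect succeeded iff a controller named nvme0 now exists
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # Detach so the next digest/dhgroup/keyid round starts from a clean slate
  scripts/rpc.py bdev_nvme_detach_controller nvme0
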
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.200 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.459 nvme0n1 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.459 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 
00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:21.718 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.718 18:10:29 
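
On the target side, nvmet_auth_set_key (host/auth.sh@42-51 above) pushes the same parameters to the kernel nvmet host entry; the traced echoes of 'hmac(sha512)', ffdhe3072, and the two DHHC-1 secrets are the values being written. A sketch of where they plausibly land, using the attribute names from the Linux nvmet DH-HMAC-CHAP support (the exact configfs paths are an assumption; the trace shows only the values):

  # Hypothetical nvmet host entry; only the echoed values appear in the trace
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"                  # auth.sh@48
  echo ffdhe3072      > "$host/dhchap_dhgroup"               # auth.sh@49
  echo "$key"         > "$host/dhchap_key"                   # auth.sh@50
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51
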
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.977 nvme0n1 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.977 18:10:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.236 nvme0n1 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.236 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
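
Note the keyid=4 round that just finished: ckeys[4] is empty, so the @58 expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) produced zero words and the controller was attached with --dhchap-key key4 alone, i.e. unidirectional authentication. The :+ array idiom in isolation (illustrative values only):

  # ${var:+words} yields nothing when var is empty, so the array stays empty
  ckeys=([1]="DHHC-1:02:example" [4]="")
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # -> keyid=1 extra args: --dhchap-ctrlr-key ckey1
  # -> keyid=4 extra args: <none>
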
key ckey 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.237 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.496 nvme0n1 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:22.496 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
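
The get_main_ns_ip block that runs before every attach (nvmf/common.sh@769-783 above) simply maps the transport to the right address variable and dereferences it. A condensed reading of the traced logic (the transport/address variable assignments below are assumptions; the trace shows them already expanded to rdma and 192.168.100.8):

  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8   # set by the test env in this run
  var=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_FIRST_TARGET_IP
  ip=${!var}                              # indirect expansion -> 192.168.100.8
  [[ -n $ip ]] && echo "$ip"
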
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.497 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.065 nvme0n1 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.065 18:10:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.065 18:10:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.065 18:10:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.324 nvme0n1 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.324 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.325 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.584 nvme0n1 00:22:23.584 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.584 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.584 
18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.584 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.584 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.843 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.103 nvme0n1 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:24.103 18:10:31 
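
With ffdhe4096 exhausted, auth.sh@101 has now picked the next DH group (ffdhe6144). The whole sha512 pass is just two nested loops over the configured groups and the five keys; per the @101-104 script references in the trace:

  # Structure of this pass; the function bodies are the set-key / connect /
  # verify steps shown throughout this log
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
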
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.103 18:10:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.103 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.826 nvme0n1 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:24.826 18:10:32 
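
All secrets in this log use the DHHC-1 representation DHHC-1:<id>:<base64>:, where the id field appears to name the transformation hash (00 = untransformed, 01/02/03 = SHA-256/384/512, which matches the 32-, 48- and 64-byte key lengths seen here) and the base64 payload carries the key plus a 4-byte CRC. That can be checked directly against the keyid=0 secret used just above:

  # 48 base64 chars -> 36 bytes = 32-byte secret + 4-byte CRC
  key='DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf:'
  echo "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 36
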
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.826 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.085 nvme0n1 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.085 18:10:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.085 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.652 nvme0n1 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.652 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.219 nvme0n1 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.219 18:10:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:26.219 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.220 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.478 nvme0n1 00:22:26.478 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.478 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.478 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.478 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.478 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.478 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 
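Note that the keyid=4 attach above passes only --dhchap-key key4: ckeys[4] is empty, so @51's [[ -z '' ]] skips the controller-key echo and @58's expansion contributes no argument, leaving that round unidirectional. The idiom isolated (the final attach line is a sketch of how the array is presumably spliced in):

    # ${var:+word} expands to word only if var is set and non-empty, so an
    # empty ckeys[keyid] silently drops the --dhchap-ctrlr-key argument
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 ... --dhchap-key "key${keyid}" "${ckey[@]}"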
00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTFjMTkxNmIwNzBmZTI2MzA4ZWJhNzBlMzQ3YWZkZGM7tbrf: 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDY0NDNjNGYxYzVmODFiYzQ5NTExOTNkY2VlMzNiNTMwODczMTZkMDFjYWVhNzY0MzA4ZjBmNzFmNWVlNTA5NR7xyMI=: 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.737 18:10:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.304 nvme0n1 
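The bare nvme0n1 tokens between each attach and the get_controllers check are the attach RPC's stdout: bdev_nvme_attach_controller prints the bdev names it created, so a printed name is already evidence that DH-HMAC-CHAP completed and the namespace was claimed. An illustrative capture:

    # capture the created bdev name straight from the attach call (sketch)
    bdev=$(scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0)
    [[ $bdev == nvme0n1 ]] && echo 'authenticated, namespace attached'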
00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.304 18:10:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.304 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.871 nvme0n1 00:22:27.871 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.871 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.871 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:27.871 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.871 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
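The nvmf/common.sh@769-@783 lines woven through every round are get_main_ns_ip resolving which address to dial for the active transport. A reconstruction consistent with the xtrace (a sketch of the helper, not its verbatim source):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # @772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # @773
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1                                  # @775: both guards visible in the trace
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}          # @776: ip=NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1                   # @778: indirect expansion of the var name
        echo "${!ip}"                                 # @783: 192.168.100.8
    }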
00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.130 18:10:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.697 nvme0n1 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODk4ZWY1NmM4ZDcyMjIxNmQ1YWJlMmY2MzhlNDY2ZGE0MWYxNjJmZjAwMzc2YTg3L5/7AA==: 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: ]] 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2Y2Y2UxNTViODUzMzE2NzZlYzU1NTQ0ZWJiYThhNjJad4Dw: 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:28.697 18:10:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:28.697 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.698 18:10:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.264 nvme0n1 00:22:29.264 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.264 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.264 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.264 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.264 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.264 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
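One ffdhe8192 round (keyid 4) remains below, and from host/auth.sh@110 onward the test flips to failure paths: the target is keyed for sha256/ffdhe2048, and the host attaches with no key, then the wrong key (key2), then a mismatched controller key (key1 + ckey2); each bdev_nvme_attach_controller comes back as JSON-RPC error -5, Input/output error. After a good attach, bdev_nvme_set_keys re-keys the live controller to the rotated target secret, while a mismatched rotation is refused with -13, Permission denied. The NOT wrapper inverts the wrapped command's status so these expected failures count as passes; its contract, roughly:

    # NOT succeeds only when the wrapped command fails (sketch of the helper's contract)
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2       # auth fails (-5), NOT passes
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2   # -13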
00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1YWUwMzQyOTVkYjk4ZDhhY2FiODY5NTQ4ZmI1MzgwYjNmMjgyNGU5ZmM0ZjgxMGE5Yzk1MGViN2MyMzM1OXDxBjg=: 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.523 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.091 nvme0n1 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:30.091 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:30.092 
18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.092 18:10:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.092 request: 00:22:30.092 { 00:22:30.092 "name": "nvme0", 00:22:30.092 "trtype": "rdma", 00:22:30.092 "traddr": "192.168.100.8", 00:22:30.092 "adrfam": "ipv4", 00:22:30.092 "trsvcid": "4420", 00:22:30.092 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:22:30.092 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:30.092 "prchk_reftag": false, 00:22:30.092 "prchk_guard": false, 00:22:30.092 "hdgst": false, 00:22:30.092 "ddgst": false, 00:22:30.092 "allow_unrecognized_csi": false, 00:22:30.092 "method": "bdev_nvme_attach_controller", 00:22:30.092 "req_id": 1 00:22:30.092 } 00:22:30.092 Got JSON-RPC error response 00:22:30.092 response: 00:22:30.092 { 00:22:30.092 "code": -5, 00:22:30.092 "message": "Input/output error" 00:22:30.092 } 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:30.092 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.351 request: 00:22:30.351 { 00:22:30.351 "name": "nvme0", 00:22:30.351 "trtype": "rdma", 00:22:30.351 "traddr": "192.168.100.8", 00:22:30.351 "adrfam": "ipv4", 00:22:30.351 "trsvcid": "4420", 00:22:30.351 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:30.351 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:30.351 "prchk_reftag": false, 00:22:30.351 "prchk_guard": false, 00:22:30.351 "hdgst": false, 00:22:30.351 "ddgst": false, 00:22:30.351 "dhchap_key": "key2", 00:22:30.351 "allow_unrecognized_csi": false, 00:22:30.351 "method": "bdev_nvme_attach_controller", 00:22:30.351 "req_id": 1 00:22:30.351 } 00:22:30.351 Got JSON-RPC error response 00:22:30.351 response: 00:22:30.351 { 00:22:30.351 "code": -5, 00:22:30.351 "message": "Input/output error" 00:22:30.351 } 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.351 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.352 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.611 request: 00:22:30.611 { 00:22:30.611 "name": "nvme0", 00:22:30.611 "trtype": "rdma", 00:22:30.611 "traddr": "192.168.100.8", 00:22:30.611 "adrfam": "ipv4", 00:22:30.611 "trsvcid": "4420", 00:22:30.611 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:30.611 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:30.611 "prchk_reftag": false, 00:22:30.611 "prchk_guard": false, 00:22:30.611 "hdgst": false, 00:22:30.611 "ddgst": false, 00:22:30.611 "dhchap_key": "key1", 00:22:30.611 "dhchap_ctrlr_key": "ckey2", 00:22:30.611 "allow_unrecognized_csi": false, 00:22:30.611 "method": "bdev_nvme_attach_controller", 00:22:30.611 "req_id": 1 00:22:30.611 } 00:22:30.611 Got JSON-RPC error response 00:22:30.611 response: 00:22:30.611 { 00:22:30.611 "code": -5, 00:22:30.611 "message": "Input/output error" 00:22:30.611 } 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:30.611 18:10:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.611 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.870 nvme0n1 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.870 
18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.870 request: 00:22:30.870 { 00:22:30.870 "name": "nvme0", 00:22:30.870 "dhchap_key": "key1", 00:22:30.870 "dhchap_ctrlr_key": "ckey2", 00:22:30.870 "method": "bdev_nvme_set_keys", 00:22:30.870 "req_id": 1 00:22:30.870 } 00:22:30.870 Got JSON-RPC error response 00:22:30.870 response: 00:22:30.870 { 00:22:30.870 "code": -13, 00:22:30.870 "message": "Permission denied" 00:22:30.870 } 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:30.870 18:10:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:32.246 18:10:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OThmMGUzZjRhZDk2YTQzMTgxMTVjNTA0NTc5M2M3YTlmNTQ5N2VlOWQ2MGViYTYwkvRpMA==: 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: ]] 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGIyZTMxZmI0MzJjODFlNDM0ZmIyYjEzMjAzMDE5ZmE4YTE0MjkwNjUxNTNiNjk4a/HAtg==: 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.182 18:10:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.182 nvme0n1 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:33.182 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDgwMmExZGM4NGY1MjM1ZTc4N2JlNmNlMDg0ZDVmZjctuyCu: 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: ]] 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc5Yzc1ZTE4MzhiYjY0ZTIyZmRjNzcyZjIwMjE3Y2GTcLpl: 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.183 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.441 request: 00:22:33.441 { 00:22:33.441 "name": "nvme0", 00:22:33.441 "dhchap_key": "key2", 00:22:33.441 "dhchap_ctrlr_key": "ckey1", 00:22:33.441 "method": "bdev_nvme_set_keys", 00:22:33.441 "req_id": 1 00:22:33.441 } 00:22:33.441 Got JSON-RPC error response 00:22:33.441 response: 00:22:33.441 { 00:22:33.441 "code": -13, 00:22:33.441 "message": "Permission denied" 00:22:33.441 } 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:33.441 18:10:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:34.377 18:10:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:35.754 
18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:35.754 rmmod nvme_rdma 00:22:35.754 rmmod nvme_fabrics 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2436496 ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2436496 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2436496 ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2436496 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2436496 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2436496' 00:22:35.754 killing process with pid 2436496 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2436496 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2436496 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:35.754 
18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:22:35.754 18:10:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:39.948 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:39.948 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:41.327 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:22:41.586 18:10:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PB9 /tmp/spdk.key-null.knt /tmp/spdk.key-sha256.GE1 /tmp/spdk.key-sha384.pcj /tmp/spdk.key-sha512.vn2 /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:22:41.586 18:10:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:44.877 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:80:04.1 (8086 2021): Already 
using the vfio-pci driver 00:22:44.877 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:22:44.877 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:44.877 00:22:44.877 real 1m2.987s 00:22:44.877 user 0m56.178s 00:22:44.877 sys 0m16.358s 00:22:44.877 18:10:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.877 18:10:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.877 ************************************ 00:22:44.877 END TEST nvmf_auth_host 00:22:44.877 ************************************ 00:22:44.877 18:10:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:22:44.877 18:10:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.878 ************************************ 00:22:44.878 START TEST nvmf_bdevperf 00:22:44.878 ************************************ 00:22:44.878 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:22:45.137 * Looking for test storage... 
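Before the bdevperf run that starts here, the DH-HMAC-CHAP exchanges traced in the nvmf_auth_host test above reduce to a handful of RPCs. A minimal sketch, assuming rpc_cmd in the harness is a thin wrapper around SPDK's scripts/rpc.py and that the DHHC-1 secrets were registered in the keyring beforehand from the /tmp/spdk.key-* files removed during cleanup (key names, NQNs, and the address are the ones visible in the trace; the keyring registration step and the exact key-file path are assumptions):

    # Register a secret file with the keyring so bdev_nvme can refer to it by name
    # (assumed step; the trace only shows the key names, not their registration)
    ./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.GE1

    # Attach with in-band authentication; this succeeds with a matching
    # key/ctrlr-key pair, while the key1/ckey2 mismatch above fails with -5
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Rotating to the pair the target expects succeeds (host/auth.sh@133)...
    ./scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ...while mismatched pairs are rejected with -13 (Permission denied), as traced
    ./scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2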
00:22:45.137 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.137 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:45.138 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.138 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.138 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.138 18:10:52 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:45.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.138 --rc genhtml_branch_coverage=1 00:22:45.138 --rc genhtml_function_coverage=1 00:22:45.138 --rc genhtml_legend=1 00:22:45.138 --rc geninfo_all_blocks=1 00:22:45.138 --rc geninfo_unexecuted_blocks=1 00:22:45.138 00:22:45.138 ' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:45.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.138 --rc genhtml_branch_coverage=1 00:22:45.138 --rc genhtml_function_coverage=1 00:22:45.138 --rc genhtml_legend=1 00:22:45.138 --rc geninfo_all_blocks=1 00:22:45.138 --rc geninfo_unexecuted_blocks=1 00:22:45.138 00:22:45.138 ' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:45.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.138 --rc genhtml_branch_coverage=1 00:22:45.138 --rc genhtml_function_coverage=1 00:22:45.138 --rc genhtml_legend=1 00:22:45.138 --rc geninfo_all_blocks=1 00:22:45.138 --rc geninfo_unexecuted_blocks=1 00:22:45.138 00:22:45.138 ' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:45.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.138 --rc genhtml_branch_coverage=1 00:22:45.138 --rc genhtml_function_coverage=1 00:22:45.138 --rc genhtml_legend=1 00:22:45.138 --rc geninfo_all_blocks=1 00:22:45.138 --rc geninfo_unexecuted_blocks=1 00:22:45.138 00:22:45.138 ' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.138 18:10:53 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.138 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:22:45.138 18:10:53 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.138 18:10:53 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.260 18:10:59 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.260 18:11:00 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:53.260 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:53.260 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:53.260 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:53.260 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:53.260 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:53.261 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:53.261 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:53.261 altname enp217s0f0np0 00:22:53.261 altname ens818f0np0 00:22:53.261 inet 192.168.100.8/24 scope global mlx_0_0 00:22:53.261 valid_lft forever preferred_lft forever 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:53.261 18:11:00 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:53.261 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:53.261 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:53.261 altname enp217s0f1np1 00:22:53.261 altname ens818f1np1 00:22:53.261 inet 192.168.100.9/24 scope global mlx_0_1 00:22:53.261 valid_lft forever preferred_lft forever 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:22:53.261 18:11:00 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:53.261 192.168.100.9' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:53.261 192.168.100.9' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:53.261 192.168.100.9' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2451758 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2451758 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2451758 ']' 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.261 18:11:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.261 [2024-12-09 18:11:00.326085] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:22:53.261 [2024-12-09 18:11:00.326141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.261 [2024-12-09 18:11:00.420452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.261 [2024-12-09 18:11:00.461616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.261 [2024-12-09 18:11:00.461657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.261 [2024-12-09 18:11:00.461666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.261 [2024-12-09 18:11:00.461674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.261 [2024-12-09 18:11:00.461681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
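The nvmf_tgt invocation captured above explains the EAL banner around it; a hedged reading of the flags (these are standard SPDK application options, and the mask arithmetic is the only addition here):

    # -i 0       shared-memory id; matches --file-prefix=spdk0 in the EAL parameters
    # -e 0xFFFF  enable all tracepoint groups, hence the 'Tracepoint Group Mask' notice
    # -m 0xE     core mask: 0xE = 0b1110, i.e. cores 1, 2 and 3 -- which is why
    #            'Total cores available: 3' is reported and three reactors start below
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE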
00:22:53.261 [2024-12-09 18:11:00.463220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.261 [2024-12-09 18:11:00.463328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.261 [2024-12-09 18:11:00.463329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.261 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.261 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:53.261 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.261 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.261 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.261 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.262 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:53.262 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.262 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 [2024-12-09 18:11:01.249839] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa9b0c0/0xa9f5b0) succeed. 00:22:53.520 [2024-12-09 18:11:01.258861] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa9c6b0/0xae0c50) succeed. 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 Malloc0 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
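Each rpc_cmd above maps one-to-one onto SPDK's scripts/rpc.py, so the target-side bring-up can be reproduced by hand against the same socket. A sketch of the equivalent calls (the RPC path is assumed from this workspace; every flag is taken verbatim from the trace):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192             # RDMA transport, 8 KiB IO units
$RPC bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host NQN
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # Malloc0 becomes namespace 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420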
00:22:53.520 [2024-12-09 18:11:01.411899] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:22:53.520 {
00:22:53.520 "params": {
00:22:53.520 "name": "Nvme$subsystem",
00:22:53.520 "trtype": "$TEST_TRANSPORT",
00:22:53.520 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:53.520 "adrfam": "ipv4",
00:22:53.520 "trsvcid": "$NVMF_PORT",
00:22:53.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:53.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:53.520 "hdgst": ${hdgst:-false},
00:22:53.520 "ddgst": ${ddgst:-false}
00:22:53.520 },
00:22:53.520 "method": "bdev_nvme_attach_controller"
00:22:53.520 }
00:22:53.520 EOF
00:22:53.520 )")
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:22:53.520 18:11:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:22:53.520 "params": {
00:22:53.520 "name": "Nvme1",
00:22:53.520 "trtype": "rdma",
00:22:53.520 "traddr": "192.168.100.8",
00:22:53.520 "adrfam": "ipv4",
00:22:53.520 "trsvcid": "4420",
00:22:53.520 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:53.520 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:53.520 "hdgst": false,
00:22:53.520 "ddgst": false
00:22:53.520 },
00:22:53.520 "method": "bdev_nvme_attach_controller"
00:22:53.520 }'
[2024-12-09 18:11:01.463923] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
[2024-12-09 18:11:01.463979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452046 ]
00:22:53.778 [2024-12-09 18:11:01.546741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:53.778 [2024-12-09 18:11:01.598968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:54.036 Running I/O for 1 seconds...
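The --json /dev/fd/62 argument above is bash process substitution: bdevperf reads its bdev configuration from the anonymous pipe fed by gen_nvmf_target_json, which builds one heredoc-expanded JSON fragment per subsystem, joins the fragments with IFS=, and validates through jq. A condensed sketch of the pattern; the outer "subsystems" wrapper is an assumption, since the trace only shows the bdev_nvme_attach_controller entry:

# Launch as the test does: <(...) shows up as /dev/fd/NN inside bdevperf.
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1

# Shape of the templating inside gen_nvmf_target_json (simplified; the real
# template carries the full parameter set printed in the trace above).
gen_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$subsystem", "trtype": "rdma",
              "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} } }
EOF
        )")
    done
    local IFS=,
    # Assumed wrapper: bdevperf expects the entries under the bdev subsystem.
    echo "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ ${config[*]} ] } ] }" | jq .
}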
00:22:54.970 17534.00 IOPS, 68.49 MiB/s
00:22:54.970                                                         Latency(us)
[2024-12-09T17:11:02.949Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:54.970 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:54.970 Verification LBA range: start 0x0 length 0x4000
00:22:54.970 Nvme1n1                     :       1.01   17536.61      68.50      0.00      0.00    7259.66    2162.69   20342.37
[2024-12-09T17:11:02.949Z] ===================================================================================================================
[2024-12-09T17:11:02.949Z] Total                       :              17536.61      68.50      0.00      0.00    7259.66    2162.69   20342.37
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2452309
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:22:55.228 {
00:22:55.228 "params": {
00:22:55.228 "name": "Nvme$subsystem",
00:22:55.228 "trtype": "$TEST_TRANSPORT",
00:22:55.228 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:55.228 "adrfam": "ipv4",
00:22:55.228 "trsvcid": "$NVMF_PORT",
00:22:55.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:55.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:55.228 "hdgst": ${hdgst:-false},
00:22:55.228 "ddgst": ${ddgst:-false}
00:22:55.228 },
00:22:55.228 "method": "bdev_nvme_attach_controller"
00:22:55.228 }
00:22:55.228 EOF
00:22:55.228 )")
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:22:55.228 18:11:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:22:55.228 18:11:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:22:55.228 18:11:03 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:22:55.228 "params": {
00:22:55.228 "name": "Nvme1",
00:22:55.228 "trtype": "rdma",
00:22:55.228 "traddr": "192.168.100.8",
00:22:55.228 "adrfam": "ipv4",
00:22:55.228 "trsvcid": "4420",
00:22:55.228 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:55.228 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:55.228 "hdgst": false,
00:22:55.228 "ddgst": false
00:22:55.228 },
00:22:55.228 "method": "bdev_nvme_attach_controller"
00:22:55.228 }'
[2024-12-09 18:11:03.033036] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization...
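Sanity check on the verify-run table above: the MiB/s column is just IOPS times the 4096-byte IO size, 17536.61 * 4096 / 2^20 = 17536.61 / 256 = 68.50 MiB/s, so the one-second run moved roughly 68.5 MiB through Nvme1n1.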
00:22:55.228 [2024-12-09 18:11:03.033089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452309 ]
00:22:55.228 [2024-12-09 18:11:03.124634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:55.228 [2024-12-09 18:11:03.160529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:55.486 Running I/O for 15 seconds...
00:22:57.798 18023.00 IOPS, 70.40 MiB/s
[2024-12-09T17:11:06.035Z] 18112.00 IOPS, 70.75 MiB/s
[2024-12-09T17:11:06.035Z] 18:11:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2451758
00:22:58.056 18:11:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:22:59.193 16000.00 IOPS, 62.50 MiB/s
[2024-12-09T17:11:07.172Z] [2024-12-09 18:11:07.014495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.193 [2024-12-09 18:11:07.014530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:87508000 sqhd:7210 p:0 m:0 dnr:0
[... the same print_command / ABORTED - SQ DELETION print_completion pair repeats for every remaining in-flight WRITE, lba 122184 through 122872, each SGL DATA BLOCK OFFSET 0x0 len:0x1000, cdw0:87508000 sqhd:7210 ...]
00:22:59.195 [2024-12-09 18:11:07.016212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x182600
00:22:59.196 [2024-12-09 18:11:07.016221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:87508000 sqhd:7210 p:0 m:0 dnr:0
[... matching READ pairs repeat for lba 121864 through 122152, each SGL KEYED DATA BLOCK len:0x1000 key:0x182600, all completed ABORTED - SQ DELETION (00/08) ...]
00:22:59.196 [2024-12-09 18:11:07.016939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182600
00:22:59.197 [2024-12-09 18:11:07.016952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:87508000 sqhd:7210 p:0 m:0 dnr:0
00:22:59.197 [2024-12-09 18:11:07.018960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:59.197 [2024-12-09 18:11:07.018974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:59.197 [2024-12-09 18:11:07.018983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122168 len:8 PRP1 0x0 PRP2 0x0
00:22:59.197 [2024-12-09 18:11:07.018992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.197 [2024-12-09 18:11:07.021838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:59.197 [2024-12-09 18:11:07.036149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:22:59.197 [2024-12-09 18:11:07.039389] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.197 [2024-12-09 18:11:07.039410] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:59.197 [2024-12-09 18:11:07.039418] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:23:00.392 12000.00 IOPS, 46.88 MiB/s
[2024-12-09T17:11:08.371Z] [2024-12-09 18:11:08.043435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:23:00.392 [2024-12-09 18:11:08.043494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:23:00.392 [2024-12-09 18:11:08.044091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:00.392 [2024-12-09 18:11:08.044102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:00.392 [2024-12-09 18:11:08.044111] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:23:00.392 [2024-12-09 18:11:08.044122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:00.392 [2024-12-09 18:11:08.048326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:00.392 [2024-12-09 18:11:08.051244] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:00.392 [2024-12-09 18:11:08.051264] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:00.392 [2024-12-09 18:11:08.051272] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:23:01.328 9600.00 IOPS, 37.50 MiB/s [2024-12-09T17:11:09.307Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2451758 Killed "${NVMF_APP[@]}" "$@" 00:23:01.328 18:11:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:23:01.328 18:11:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:01.328 18:11:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:01.328 18:11:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.328 18:11:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2453293 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2453293 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2453293 ']' 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.328 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:01.328 [2024-12-09 18:11:09.055043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:01.328 [2024-12-09 18:11:09.055069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:23:01.328 [2024-12-09 18:11:09.055257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:01.328 [2024-12-09 18:11:09.055269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:01.328 [2024-12-09 18:11:09.055278] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:23:01.328 [2024-12-09 18:11:09.055290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:01.328 [2024-12-09 18:11:09.055827] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:23:01.328 [2024-12-09 18:11:09.055872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.328 [2024-12-09 18:11:09.060546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:01.328 [2024-12-09 18:11:09.063139] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:01.328 [2024-12-09 18:11:09.063160] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:01.328 [2024-12-09 18:11:09.063168] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:23:01.328 [2024-12-09 18:11:09.150382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.328 [2024-12-09 18:11:09.191061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.328 [2024-12-09 18:11:09.191100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.328 [2024-12-09 18:11:09.191109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.328 [2024-12-09 18:11:09.191117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.328 [2024-12-09 18:11:09.191139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.328 [2024-12-09 18:11:09.192621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.328 [2024-12-09 18:11:09.192730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.328 [2024-12-09 18:11:09.192732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.152 8000.00 IOPS, 31.25 MiB/s [2024-12-09T17:11:10.131Z] 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.152 18:11:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 [2024-12-09 18:11:09.975043] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x59a0c0/0x59e5b0) succeed. 00:23:02.152 [2024-12-09 18:11:09.983993] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x59b6b0/0x5dfc50) succeed. 00:23:02.152 [2024-12-09 18:11:10.067222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:02.152 [2024-12-09 18:11:10.067262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:02.152 [2024-12-09 18:11:10.067439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:02.152 [2024-12-09 18:11:10.067450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:02.152 [2024-12-09 18:11:10.067459] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:23:02.152 [2024-12-09 18:11:10.067472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:23:02.152 [2024-12-09 18:11:10.076547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:02.152 [2024-12-09 18:11:10.079262] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:02.152 [2024-12-09 18:11:10.079286] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:02.152 [2024-12-09 18:11:10.079294] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 Malloc0 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.152 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:02.152 [2024-12-09 18:11:10.129878] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:02.410 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.410 18:11:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2452309 00:23:03.343 6857.14 IOPS, 26.79 MiB/s [2024-12-09T17:11:11.322Z] [2024-12-09 18:11:11.083364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:23:03.343 [2024-12-09 18:11:11.083391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
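For reference, the target-side bring-up that bdevperf.sh drives through rpc_cmd in the traces above (RDMA transport, Malloc0 bdev, cnode1 subsystem, namespace, listener) corresponds to the following standalone sequence. This is a minimal sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py helper in place of the test harness's rpc_cmd wrapper; the transport options, bdev geometry, NQN, and listener address mirror the values in the trace:

    # Sketch only: assumes nvmf_tgt is already running on the default /var/tmp/spdk.sock
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420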
00:23:03.343 [2024-12-09 18:11:11.083566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:03.343 [2024-12-09 18:11:11.083576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:03.343 [2024-12-09 18:11:11.083586] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:23:03.343 [2024-12-09 18:11:11.083598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:23:03.343 [2024-12-09 18:11:11.093450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:03.343 [2024-12-09 18:11:11.131254] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:23:04.652 6512.00 IOPS, 25.44 MiB/s
[2024-12-09T17:11:13.566Z] 7812.67 IOPS, 30.52 MiB/s
[2024-12-09T17:11:14.501Z] 8853.60 IOPS, 34.58 MiB/s
[2024-12-09T17:11:15.434Z] 9704.73 IOPS, 37.91 MiB/s
[2024-12-09T17:11:16.809Z] 10415.92 IOPS, 40.69 MiB/s
[2024-12-09T17:11:17.744Z] 11017.15 IOPS, 43.04 MiB/s
[2024-12-09T17:11:18.679Z] 11533.64 IOPS, 45.05 MiB/s
[2024-12-09T17:11:18.679Z] 11979.73 IOPS, 46.80 MiB/s
00:23:10.700 Latency(us)
00:23:10.700 [2024-12-09T17:11:18.679Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:10.700 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:10.700 Verification LBA range: start 0x0 length 0x4000
00:23:10.700 Nvme1n1                     :      15.01   11981.86      46.80   13841.85       0.00    4936.57     445.64 1033476.51
00:23:10.700 [2024-12-09T17:11:18.679Z] ===================================================================================================================
00:23:10.700 [2024-12-09T17:11:18.679Z] Total                       :            11981.86      46.80   13841.85       0.00    4936.57     445.64 1033476.51
00:23:10.700 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.700 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:23:10.700 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:23:10.700 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2453293 ']' 00:23:10.700 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2453293 00:23:10.701 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2453293 ']' 00:23:10.701 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2453293 00:23:10.701 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:10.701 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.701 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2453293 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2453293' 00:23:10.960 killing process with pid 2453293 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2453293 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2453293 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:10.960 00:23:10.960 real 0m26.128s 00:23:10.960 user 1m4.792s 00:23:10.960 sys 0m6.825s 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.960 18:11:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:10.960 ************************************ 00:23:10.960 END TEST nvmf_bdevperf 00:23:10.960 ************************************ 00:23:11.219 18:11:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:11.219 18:11:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.219 18:11:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.219 18:11:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.219 ************************************ 00:23:11.219 START TEST nvmf_target_disconnect 00:23:11.219 ************************************ 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:11.219 * Looking for test storage... 
00:23:11.219 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.219 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.479 --rc genhtml_branch_coverage=1 00:23:11.479 --rc genhtml_function_coverage=1 00:23:11.479 --rc genhtml_legend=1 00:23:11.479 --rc geninfo_all_blocks=1 00:23:11.479 --rc geninfo_unexecuted_blocks=1 00:23:11.479 00:23:11.479 ' 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.479 --rc genhtml_branch_coverage=1 00:23:11.479 --rc genhtml_function_coverage=1 00:23:11.479 --rc genhtml_legend=1 00:23:11.479 --rc geninfo_all_blocks=1 00:23:11.479 --rc geninfo_unexecuted_blocks=1 00:23:11.479 00:23:11.479 ' 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.479 --rc genhtml_branch_coverage=1 00:23:11.479 --rc genhtml_function_coverage=1 00:23:11.479 --rc genhtml_legend=1 00:23:11.479 --rc geninfo_all_blocks=1 00:23:11.479 --rc geninfo_unexecuted_blocks=1 00:23:11.479 00:23:11.479 ' 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.479 --rc genhtml_branch_coverage=1 00:23:11.479 --rc genhtml_function_coverage=1 00:23:11.479 --rc genhtml_legend=1 00:23:11.479 --rc geninfo_all_blocks=1 00:23:11.479 --rc geninfo_unexecuted_blocks=1 00:23:11.479 00:23:11.479 ' 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.479 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.480 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.480 18:11:19 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:19.606 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:19.606 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:19.606 18:11:26 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:19.606 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:19.607 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:19.607 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:19.607 18:11:26 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:19.607 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:19.607 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:19.607 altname enp217s0f0np0 00:23:19.607 altname ens818f0np0 00:23:19.607 inet 192.168.100.8/24 scope global mlx_0_0 00:23:19.607 valid_lft forever preferred_lft forever 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:19.607 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:19.607 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:19.607 altname enp217s0f1np1 00:23:19.607 altname ens818f1np1 00:23:19.607 inet 192.168.100.9/24 scope global mlx_0_1 00:23:19.607 valid_lft forever preferred_lft forever 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:19.607 192.168.100.9' 00:23:19.607 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:19.607 192.168.100.9' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:19.608 192.168.100.9' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:19.608 ************************************ 00:23:19.608 START TEST nvmf_target_disconnect_tc1 00:23:19.608 ************************************ 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:23:19.608 18:11:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:19.608 [2024-12-09 18:11:26.711515] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:19.608 [2024-12-09 18:11:26.711579] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:19.608 [2024-12-09 18:11:26.711587] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:23:19.867 [2024-12-09 18:11:27.715635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:23:19.867 [2024-12-09 18:11:27.715708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
00:23:19.867 [2024-12-09 18:11:27.715735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:23:19.867 [2024-12-09 18:11:27.715758] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:19.867 [2024-12-09 18:11:27.715766] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:23:19.867 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:23:19.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:23:19.867 Initializing NVMe Controllers 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.867 00:23:19.867 real 0m1.158s 00:23:19.867 user 0m0.910s 00:23:19.867 sys 0m0.237s 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.867 ************************************ 00:23:19.867 END TEST nvmf_target_disconnect_tc1 00:23:19.867 ************************************ 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:19.867 ************************************ 00:23:19.867 START TEST nvmf_target_disconnect_tc2 00:23:19.867 ************************************ 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2458462 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2458462 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2458462 ']' 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.867 18:11:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.126 [2024-12-09 18:11:27.863179] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:23:20.126 [2024-12-09 18:11:27.863232] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.126 [2024-12-09 18:11:27.953001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.126 [2024-12-09 18:11:27.992579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.126 [2024-12-09 18:11:27.992620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.126 [2024-12-09 18:11:27.992630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.126 [2024-12-09 18:11:27.992639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.126 [2024-12-09 18:11:27.992646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
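The target start-up that produced the notices above is straightforward: launch nvmf_tgt with tracepoints enabled (-e 0xFFFF) on reactor mask 0xF0, i.e. cores 4-7, and block until its RPC socket answers. A sketch of roughly what nvmfappstart plus waitforlisten amount to, assuming the default /var/tmp/spdk.sock RPC socket:

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Poll the RPC socket until the app is up and answering requests.
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done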
00:23:20.126 [2024-12-09 18:11:27.994348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:20.126 [2024-12-09 18:11:27.994462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:20.126 [2024-12-09 18:11:27.994482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:23:20.126 [2024-12-09 18:11:27.994484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:20.126 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.126 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:20.126 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.126 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.126 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.384 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.384 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:20.384 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.384 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.384 Malloc0 00:23:20.384 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.385 [2024-12-09 18:11:28.198201] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a0fa30/0x1a1b790) succeed. 00:23:20.385 [2024-12-09 18:11:28.207798] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a110c0/0x1a5ce30) succeed. 
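With the reactors running, the harness issues two RPCs: a 64 MiB malloc bdev with 512-byte blocks to back the namespace, and the RDMA transport with 1024 shared buffers; the "Create IB device mlx5_0/mlx5_1 succeed" notices are the transport binding to both mlx5 ports. The same sequence through scripts/rpc.py, as a sketch (rpc_cmd in the harness is a thin wrapper around it):

    rpc="$SPDK_DIR/scripts/rpc.py"
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512 B blocks
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024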
00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.385 [2024-12-09 18:11:28.352109] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.385 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.642 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.642 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2458660 00:23:20.642 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:23:20.642 18:11:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:22.535 18:11:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
2458462 00:23:22.535 18:11:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Read completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 Write completed with error (sct=0, sc=8) 00:23:23.910 starting I/O failed 00:23:23.910 [2024-12-09 18:11:31.572700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:24.478 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2458462 Killed "${NVMF_APP[@]}" "$@" 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 
0xF0 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2459287 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2459287 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2459287 ']' 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.478 18:11:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.478 [2024-12-09 18:11:32.435453] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:23:24.478 [2024-12-09 18:11:32.435500] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.737 [2024-12-09 18:11:32.526501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.737 [2024-12-09 18:11:32.564284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.737 [2024-12-09 18:11:32.564323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.737 [2024-12-09 18:11:32.564333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.737 [2024-12-09 18:11:32.564341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.737 [2024-12-09 18:11:32.564348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
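The "disconnect" in tc2 is nothing subtler than a SIGKILL of the running target while the reconnect example drives I/O. The 33 "completed with error (sct=0, sc=8)" completions above are the in-flight commands being failed back when the queue pair drops (generic status 0x8 reads as Command Aborted due to SQ Deletion), after which a fresh nvmf_tgt instance is brought up. A sketch of that choreography, not verbatim from target_disconnect.sh:

    kill -9 "$nvmfpid"                    # hard-kill the target mid-workload
    sleep 2                               # let the host hit the CQ transport error
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                            # new instance, new pid (2459287 here)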
00:23:24.737 [2024-12-09 18:11:32.565997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:24.737 [2024-12-09 18:11:32.566108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:24.737 [2024-12-09 18:11:32.566215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:24.737 [2024-12-09 18:11:32.566217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Read completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 Write completed with error (sct=0, sc=8) 00:23:24.737 starting I/O failed 00:23:24.737 [2024-12-09 18:11:32.577747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:24.737 [2024-12-09 18:11:32.579392] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:24.737 [2024-12-09 18:11:32.579415] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:24.737 [2024-12-09 18:11:32.579424] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.303 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.303 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:25.303 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.303 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.303 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 Malloc0 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 [2024-12-09 18:11:33.380658] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2038a30/0x2044790) succeed. 00:23:25.561 [2024-12-09 18:11:33.390556] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x203a0c0/0x2085e30) succeed. 
00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.561 [2024-12-09 18:11:33.534524] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:25.561 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.562 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:25.820 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.820 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.820 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.820 18:11:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2458660 00:23:25.820 [2024-12-09 18:11:33.583408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.820 qpair failed and we were unable to recover it. 
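The restarted target is wired up exactly as the first instance was: same subsystem, namespace, and listeners, after which the script simply waits on the reconnect example (pid 2458660) to finish its 10-second run. The subsystem RPCs, again as a sketch via rpc.py:

    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420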
00:23:25.820 [2024-12-09 18:11:33.588744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.820 [2024-12-09 18:11:33.588796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.820 [2024-12-09 18:11:33.588817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.820 [2024-12-09 18:11:33.588828] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.820 [2024-12-09 18:11:33.588837] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.820 [2024-12-09 18:11:33.598964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.820 qpair failed and we were unable to recover it. 00:23:25.820 [2024-12-09 18:11:33.608599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.820 [2024-12-09 18:11:33.608642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.820 [2024-12-09 18:11:33.608661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.820 [2024-12-09 18:11:33.608671] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.820 [2024-12-09 18:11:33.608680] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.820 [2024-12-09 18:11:33.618858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.820 qpair failed and we were unable to recover it. 00:23:25.820 [2024-12-09 18:11:33.628683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.820 [2024-12-09 18:11:33.628726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.820 [2024-12-09 18:11:33.628744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.820 [2024-12-09 18:11:33.628754] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.820 [2024-12-09 18:11:33.628762] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.820 [2024-12-09 18:11:33.639037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.820 qpair failed and we were unable to recover it. 
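The failure loop that repeats from here down is one pattern: each reconnect attempt sends a Fabrics CONNECT for an I/O queue that still carries controller ID 0x1 from the killed target, the new target has no such controller ("Unknown controller ID 0x1") and completes the CONNECT with sct 1, sc 130 (0x82, which reads as Connect Invalid Parameters), and the host tears the qpair down and retries, producing one "CQ transport error -6 ... qpair failed and we were unable to recover it" block per attempt.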
00:23:25.820 [2024-12-09 18:11:33.648829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.648872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.648890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.648900] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.648908] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.659023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 00:23:25.821 [2024-12-09 18:11:33.668763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.668804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.668822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.668831] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.668840] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.679104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 00:23:25.821 [2024-12-09 18:11:33.688763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.688798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.688816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.688826] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.688834] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.699219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 
00:23:25.821 [2024-12-09 18:11:33.708840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.708877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.708897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.708907] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.708916] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.719321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 00:23:25.821 [2024-12-09 18:11:33.728875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.728917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.728936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.728945] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.728959] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.739231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 00:23:25.821 [2024-12-09 18:11:33.749028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.749071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.749088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.749098] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.749107] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.759333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 
00:23:25.821 [2024-12-09 18:11:33.768982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.769024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.769041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.769051] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.769059] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:25.821 [2024-12-09 18:11:33.779439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:25.821 qpair failed and we were unable to recover it. 00:23:25.821 [2024-12-09 18:11:33.789074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:25.821 [2024-12-09 18:11:33.789118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:25.821 [2024-12-09 18:11:33.789135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:25.821 [2024-12-09 18:11:33.789144] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:25.821 [2024-12-09 18:11:33.789153] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.799641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.809232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.809274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.809295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.809305] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.809313] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.819590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 
00:23:26.080 [2024-12-09 18:11:33.829276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.829315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.829333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.829343] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.829351] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.839508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.849406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.849444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.849462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.849472] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.849480] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.859567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.869375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.869415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.869432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.869441] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.869450] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.879674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 
00:23:26.080 [2024-12-09 18:11:33.889399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.889439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.889457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.889470] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.889478] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.899574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.909455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.909491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.909509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.909518] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.909527] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.919797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.929497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.929540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.929557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.929567] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.929576] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.939808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 
00:23:26.080 [2024-12-09 18:11:33.949616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.949658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.949675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.949685] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.949693] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.959967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.969637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.969679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.969697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.969706] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.969715] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:33.979854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:33.989714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:33.989752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:33.989770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:33.989779] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:33.989788] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:34.000065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 
00:23:26.080 [2024-12-09 18:11:34.009677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:34.009712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:34.009729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:34.009739] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:34.009748] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:34.020048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:34.029854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:34.029894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:34.029911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:34.029921] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:34.029929] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.080 [2024-12-09 18:11:34.040209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.080 qpair failed and we were unable to recover it. 00:23:26.080 [2024-12-09 18:11:34.049951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.080 [2024-12-09 18:11:34.049993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.080 [2024-12-09 18:11:34.050010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.080 [2024-12-09 18:11:34.050020] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.080 [2024-12-09 18:11:34.050028] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.060278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 
00:23:26.339 [2024-12-09 18:11:34.070031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.339 [2024-12-09 18:11:34.070076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.339 [2024-12-09 18:11:34.070093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.339 [2024-12-09 18:11:34.070102] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.339 [2024-12-09 18:11:34.070111] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.080409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 00:23:26.339 [2024-12-09 18:11:34.089971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.339 [2024-12-09 18:11:34.090006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.339 [2024-12-09 18:11:34.090024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.339 [2024-12-09 18:11:34.090033] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.339 [2024-12-09 18:11:34.090041] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.100346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 00:23:26.339 [2024-12-09 18:11:34.110009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.339 [2024-12-09 18:11:34.110052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.339 [2024-12-09 18:11:34.110070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.339 [2024-12-09 18:11:34.110079] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.339 [2024-12-09 18:11:34.110088] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.120565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 
00:23:26.339 [2024-12-09 18:11:34.130113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.339 [2024-12-09 18:11:34.130152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.339 [2024-12-09 18:11:34.130170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.339 [2024-12-09 18:11:34.130179] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.339 [2024-12-09 18:11:34.130188] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.140405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 00:23:26.339 [2024-12-09 18:11:34.150326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.339 [2024-12-09 18:11:34.150365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.339 [2024-12-09 18:11:34.150386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.339 [2024-12-09 18:11:34.150395] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.339 [2024-12-09 18:11:34.150404] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.160570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 00:23:26.339 [2024-12-09 18:11:34.170242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:26.339 [2024-12-09 18:11:34.170280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:26.339 [2024-12-09 18:11:34.170298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:26.339 [2024-12-09 18:11:34.170307] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:26.339 [2024-12-09 18:11:34.170316] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:26.339 [2024-12-09 18:11:34.180426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:26.339 qpair failed and we were unable to recover it. 
00:23:26.339 [2024-12-09 18:11:34.190340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.190379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.190396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.339 [2024-12-09 18:11:34.190406] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.339 [2024-12-09 18:11:34.190414] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.339 [2024-12-09 18:11:34.200450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.339 qpair failed and we were unable to recover it.
00:23:26.339 [2024-12-09 18:11:34.210416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.210459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.210476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.339 [2024-12-09 18:11:34.210485] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.339 [2024-12-09 18:11:34.210494] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.339 [2024-12-09 18:11:34.220736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.339 qpair failed and we were unable to recover it.
00:23:26.339 [2024-12-09 18:11:34.230505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.230552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.230570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.339 [2024-12-09 18:11:34.230580] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.339 [2024-12-09 18:11:34.230592] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.339 [2024-12-09 18:11:34.240590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.339 qpair failed and we were unable to recover it.
00:23:26.339 [2024-12-09 18:11:34.250520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.250558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.250576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.339 [2024-12-09 18:11:34.250585] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.339 [2024-12-09 18:11:34.250594] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.339 [2024-12-09 18:11:34.260655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.339 qpair failed and we were unable to recover it.
00:23:26.339 [2024-12-09 18:11:34.270583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.270623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.270641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.339 [2024-12-09 18:11:34.270650] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.339 [2024-12-09 18:11:34.270659] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.339 [2024-12-09 18:11:34.280882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.339 qpair failed and we were unable to recover it.
00:23:26.339 [2024-12-09 18:11:34.290671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.290712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.290729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.339 [2024-12-09 18:11:34.290739] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.339 [2024-12-09 18:11:34.290748] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.339 [2024-12-09 18:11:34.300859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.339 qpair failed and we were unable to recover it.
00:23:26.339 [2024-12-09 18:11:34.310650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.339 [2024-12-09 18:11:34.310689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.339 [2024-12-09 18:11:34.310707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.340 [2024-12-09 18:11:34.310717] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.340 [2024-12-09 18:11:34.310725] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.321016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.330724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.330766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.330784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.330794] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.330802] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.340888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.350804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.350850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.350867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.350877] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.350885] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.361164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.370807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.370846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.370864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.370873] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.370881] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.381038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.390818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.390868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.390886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.390895] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.390904] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.401164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.410847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.410894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.410912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.410921] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.410930] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.421170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.430987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.431028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.431045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.431054] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.431062] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.441334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.451116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.451158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.451175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.451185] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.451193] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.461134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.471100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.471142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.471159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.471168] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.471177] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.481515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.491145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.598 [2024-12-09 18:11:34.491190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.598 [2024-12-09 18:11:34.491211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.598 [2024-12-09 18:11:34.491220] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.598 [2024-12-09 18:11:34.491229] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.598 [2024-12-09 18:11:34.501332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.598 qpair failed and we were unable to recover it.
00:23:26.598 [2024-12-09 18:11:34.511220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.599 [2024-12-09 18:11:34.511261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.599 [2024-12-09 18:11:34.511279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.599 [2024-12-09 18:11:34.511288] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.599 [2024-12-09 18:11:34.511296] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.599 [2024-12-09 18:11:34.521552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.599 qpair failed and we were unable to recover it.
00:23:26.599 [2024-12-09 18:11:34.531225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.599 [2024-12-09 18:11:34.531265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.599 [2024-12-09 18:11:34.531283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.599 [2024-12-09 18:11:34.531292] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.599 [2024-12-09 18:11:34.531301] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.599 [2024-12-09 18:11:34.541428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.599 qpair failed and we were unable to recover it.
00:23:26.599 [2024-12-09 18:11:34.551469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.599 [2024-12-09 18:11:34.551509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.599 [2024-12-09 18:11:34.551527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.599 [2024-12-09 18:11:34.551536] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.599 [2024-12-09 18:11:34.551545] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.599 [2024-12-09 18:11:34.561530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.599 qpair failed and we were unable to recover it.
00:23:26.599 [2024-12-09 18:11:34.571319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.599 [2024-12-09 18:11:34.571362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.599 [2024-12-09 18:11:34.571379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.599 [2024-12-09 18:11:34.571389] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.599 [2024-12-09 18:11:34.571401] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.857 [2024-12-09 18:11:34.581510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.857 qpair failed and we were unable to recover it.
00:23:26.857 [2024-12-09 18:11:34.591330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.857 [2024-12-09 18:11:34.591367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.857 [2024-12-09 18:11:34.591384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.857 [2024-12-09 18:11:34.591394] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.857 [2024-12-09 18:11:34.591402] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.857 [2024-12-09 18:11:34.601607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.857 qpair failed and we were unable to recover it.
00:23:26.857 [2024-12-09 18:11:34.611442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.857 [2024-12-09 18:11:34.611482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.857 [2024-12-09 18:11:34.611500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.857 [2024-12-09 18:11:34.611510] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.857 [2024-12-09 18:11:34.611518] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.857 [2024-12-09 18:11:34.621607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.857 qpair failed and we were unable to recover it.
00:23:26.857 [2024-12-09 18:11:34.631455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.631495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.631512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.631522] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.631530] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.641745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.651500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.651538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.651555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.651564] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.651573] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.661807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.671596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.671635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.671652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.671662] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.671670] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.681778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.691667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.691707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.691724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.691734] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.691743] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.701722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.711621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.711663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.711681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.711690] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.711698] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.721846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.731727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.731770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.731788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.731797] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.731806] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.741964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.751756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.751795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.751812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.751821] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.751830] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.762074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.771835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.771876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.771893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.771903] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.771911] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.782082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.791877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.791921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.791939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.791953] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.791962] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.802277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.812080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.812124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.812142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.812151] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.812160] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:26.858 [2024-12-09 18:11:34.822111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:26.858 qpair failed and we were unable to recover it.
00:23:26.858 [2024-12-09 18:11:34.832041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:26.858 [2024-12-09 18:11:34.832078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:26.858 [2024-12-09 18:11:34.832096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:26.858 [2024-12-09 18:11:34.832108] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:26.858 [2024-12-09 18:11:34.832117] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.116 [2024-12-09 18:11:34.842882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.116 qpair failed and we were unable to recover it.
00:23:27.116 [2024-12-09 18:11:34.852116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.116 [2024-12-09 18:11:34.852160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.116 [2024-12-09 18:11:34.852177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.116 [2024-12-09 18:11:34.852186] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.116 [2024-12-09 18:11:34.852194] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.116 [2024-12-09 18:11:34.862553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.116 qpair failed and we were unable to recover it.
00:23:27.116 [2024-12-09 18:11:34.872231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.116 [2024-12-09 18:11:34.872271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.116 [2024-12-09 18:11:34.872289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.116 [2024-12-09 18:11:34.872298] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.116 [2024-12-09 18:11:34.872307] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.116 [2024-12-09 18:11:34.882325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.116 qpair failed and we were unable to recover it.
00:23:27.116 [2024-12-09 18:11:34.892202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.116 [2024-12-09 18:11:34.892240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.116 [2024-12-09 18:11:34.892257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.116 [2024-12-09 18:11:34.892266] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.116 [2024-12-09 18:11:34.892275] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.116 [2024-12-09 18:11:34.902497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.116 qpair failed and we were unable to recover it.
00:23:27.116 [2024-12-09 18:11:34.912253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.116 [2024-12-09 18:11:34.912297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.116 [2024-12-09 18:11:34.912315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.116 [2024-12-09 18:11:34.912324] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.116 [2024-12-09 18:11:34.912336] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.116 [2024-12-09 18:11:34.922519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.116 qpair failed and we were unable to recover it.
00:23:27.116 [2024-12-09 18:11:34.932267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.116 [2024-12-09 18:11:34.932308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.116 [2024-12-09 18:11:34.932327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.116 [2024-12-09 18:11:34.932336] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.116 [2024-12-09 18:11:34.932345] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.116 [2024-12-09 18:11:34.942480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.116 qpair failed and we were unable to recover it.
00:23:27.116 [2024-12-09 18:11:34.952337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.116 [2024-12-09 18:11:34.952379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.116 [2024-12-09 18:11:34.952398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.116 [2024-12-09 18:11:34.952407] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.116 [2024-12-09 18:11:34.952416] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:34.962606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:34.972461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:34.972502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:34.972520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:34.972529] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:34.972538] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:34.982565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:34.992491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:34.992533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:34.992550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:34.992559] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:34.992568] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:35.002896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:35.012585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:35.012625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:35.012642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:35.012652] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:35.012660] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:35.022832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:35.032577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:35.032616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:35.032634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:35.032643] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:35.032652] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:35.042892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:35.052716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:35.052760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:35.052777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:35.052786] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:35.052795] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:35.062994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:35.072724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:35.072764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:35.072781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:35.072791] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:35.072799] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.117 [2024-12-09 18:11:35.082979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.117 qpair failed and we were unable to recover it.
00:23:27.117 [2024-12-09 18:11:35.092773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.117 [2024-12-09 18:11:35.092813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.117 [2024-12-09 18:11:35.092833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.117 [2024-12-09 18:11:35.092843] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.117 [2024-12-09 18:11:35.092851] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.102860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.112972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.113017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.113035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.113044] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.113053] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.123018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.132890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.132933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.132957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.132967] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.132975] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.143145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.153050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.153094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.153110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.153120] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.153128] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.163217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.173101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.173141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.173159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.173171] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.173180] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.183342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.193279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.193322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.193340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.193349] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.193358] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.203430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.213192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.213233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.213250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.213259] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.213268] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.223454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.233265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.233307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.233324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.233334] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.233342] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.243471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.253317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.253358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.253376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.253385] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.253394] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.263569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.273338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.273378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.273396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.273405] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.273413] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.283645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.293475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.293518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.293536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.293545] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.293553] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.303785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.313535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.313578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.313596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.313605] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.313614] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.323857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.375 [2024-12-09 18:11:35.333506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.375 [2024-12-09 18:11:35.333544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.375 [2024-12-09 18:11:35.333562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.375 [2024-12-09 18:11:35.333571] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.375 [2024-12-09 18:11:35.333580] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.375 [2024-12-09 18:11:35.343745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.375 qpair failed and we were unable to recover it.
00:23:27.633 [2024-12-09 18:11:35.353678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.633 [2024-12-09 18:11:35.353722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.633 [2024-12-09 18:11:35.353740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.633 [2024-12-09 18:11:35.353751] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.633 [2024-12-09 18:11:35.353760] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.633 [2024-12-09 18:11:35.363893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.633 qpair failed and we were unable to recover it.
00:23:27.633 [2024-12-09 18:11:35.373619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.633 [2024-12-09 18:11:35.373663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.633 [2024-12-09 18:11:35.373681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.633 [2024-12-09 18:11:35.373690] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.633 [2024-12-09 18:11:35.373699] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.633 [2024-12-09 18:11:35.383861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.633 qpair failed and we were unable to recover it.
00:23:27.633 [2024-12-09 18:11:35.393831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.633 [2024-12-09 18:11:35.393871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.633 [2024-12-09 18:11:35.393889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.633 [2024-12-09 18:11:35.393899] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.633 [2024-12-09 18:11:35.393907] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.633 [2024-12-09 18:11:35.404013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.633 qpair failed and we were unable to recover it.
00:23:27.633 [2024-12-09 18:11:35.413810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.633 [2024-12-09 18:11:35.413850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.633 [2024-12-09 18:11:35.413868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.633 [2024-12-09 18:11:35.413877] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.633 [2024-12-09 18:11:35.413886] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.633 [2024-12-09 18:11:35.424088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.633 qpair failed and we were unable to recover it.
00:23:27.633 [2024-12-09 18:11:35.433837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.633 [2024-12-09 18:11:35.433880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.633 [2024-12-09 18:11:35.433901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.633 [2024-12-09 18:11:35.433910] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.633 [2024-12-09 18:11:35.433918] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.444338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.453871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.453913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.453931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.453940] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.453955] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.464176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.473908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.473963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.473981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.473991] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.473999] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.484689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.493975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.494019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.494037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.494046] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.494054] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.504326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.514208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.514248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.514266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.514278] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.514287] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.524489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.534093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.534134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.534152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.534161] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.534170] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.544481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.554252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.554286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.554303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.554312] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.554321] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.564663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.574270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.574312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.574330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.574339] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.574348] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.584602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.634 [2024-12-09 18:11:35.594330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.634 [2024-12-09 18:11:35.594373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.634 [2024-12-09 18:11:35.594390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.634 [2024-12-09 18:11:35.594399] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.634 [2024-12-09 18:11:35.594408] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.634 [2024-12-09 18:11:35.604678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.634 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.614428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.614468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.614485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.614495] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.614503] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.624797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.634557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.634598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.634615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.634624] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.634633] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.644817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.654666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.654709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.654727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.654736] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.654744] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.664836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.674681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.674721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.674738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.674748] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.674756] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.684972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.694677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.694719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.694737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.694746] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.694754] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.705121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.714709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.714750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.714769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.714779] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.714787] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.725288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.734728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.734768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.734785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.734795] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.734803] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.745095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.754783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.754824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.754841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.754850] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.754858] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.765264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.774965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.775009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.775030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.775040] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.775048] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.785181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.794968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.795005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.795022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.795032] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.795040] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.805444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.815043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.893 [2024-12-09 18:11:35.815085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.893 [2024-12-09 18:11:35.815102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.893 [2024-12-09 18:11:35.815111] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.893 [2024-12-09 18:11:35.815120] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.893 [2024-12-09 18:11:35.825141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.893 qpair failed and we were unable to recover it.
00:23:27.893 [2024-12-09 18:11:35.835214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.894 [2024-12-09 18:11:35.835256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.894 [2024-12-09 18:11:35.835273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.894 [2024-12-09 18:11:35.835283] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.894 [2024-12-09 18:11:35.835291] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.894 [2024-12-09 18:11:35.845341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.894 qpair failed and we were unable to recover it.
00:23:27.894 [2024-12-09 18:11:35.855205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:27.894 [2024-12-09 18:11:35.855245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:27.894 [2024-12-09 18:11:35.855263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:27.894 [2024-12-09 18:11:35.855272] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:27.894 [2024-12-09 18:11:35.855283] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:27.894 [2024-12-09 18:11:35.865483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:27.894 qpair failed and we were unable to recover it.
00:23:28.152 [2024-12-09 18:11:35.875220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.152 [2024-12-09 18:11:35.875259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.152 [2024-12-09 18:11:35.875277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.152 [2024-12-09 18:11:35.875286] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.152 [2024-12-09 18:11:35.875294] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.152 [2024-12-09 18:11:35.885644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:35.895325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:35.895366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:35.895383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:35.895392] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:35.895401] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:35.905666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:35.915442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:35.915483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:35.915501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:35.915510] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:35.915519] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:35.925642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:35.935424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:35.935464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:35.935481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:35.935491] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:35.935499] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:35.945625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:35.955544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:35.955588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:35.955605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:35.955615] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:35.955623] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:35.965773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:35.975556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:35.975597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:35.975615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:35.975624] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:35.975633] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:35.985627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:35.995670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:35.995710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:35.995727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:35.995737] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:35.995745] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.005913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:36.015740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:36.015782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:36.015800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:36.015810] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:36.015818] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.025796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:36.035675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:36.035715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:36.035733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:36.035742] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:36.035751] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.046022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:36.055804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:36.055844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:36.055861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:36.055871] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:36.055879] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.065965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:36.075734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:36.075774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:36.075791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:36.075800] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:36.075809] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.086084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:36.095871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:36.095913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:36.095930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:36.095940] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:36.095962] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.105967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.153 [2024-12-09 18:11:36.115865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.153 [2024-12-09 18:11:36.115903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.153 [2024-12-09 18:11:36.115926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.153 [2024-12-09 18:11:36.115936] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.153 [2024-12-09 18:11:36.115945] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.153 [2024-12-09 18:11:36.126478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.153 qpair failed and we were unable to recover it.
00:23:28.412 [2024-12-09 18:11:36.135969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.412 [2024-12-09 18:11:36.136010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.412 [2024-12-09 18:11:36.136028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.412 [2024-12-09 18:11:36.136037] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.412 [2024-12-09 18:11:36.136046] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.412 [2024-12-09 18:11:36.146040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.412 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.156075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.156116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.156134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.156143] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.156151] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.166365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.176151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.176191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.176208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.176218] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.176226] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.186485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.196342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.196379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.196396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.196406] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.196417] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.206498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.216419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.216459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.216478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.216488] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.216496] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.226518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.236445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.236484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.236501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.236511] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.236519] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.246840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.256425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.256463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.256480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.256489] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.256498] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.266544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.276445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.276483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.276500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.276509] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.276517] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.286857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.296446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.296485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.296503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.296512] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.296520] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.306705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.316481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.316524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.316541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.316550] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.316559] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.326699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.336503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.336548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.336566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.336575] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.336584] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.346876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.356608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.356653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.356670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.356679] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.356688] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.367017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.413 [2024-12-09 18:11:36.376685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.413 [2024-12-09 18:11:36.376732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.413 [2024-12-09 18:11:36.376750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.413 [2024-12-09 18:11:36.376759] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.413 [2024-12-09 18:11:36.376767] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.413 [2024-12-09 18:11:36.386887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.413 qpair failed and we were unable to recover it.
00:23:28.672 [2024-12-09 18:11:36.396695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.672 [2024-12-09 18:11:36.396732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.672 [2024-12-09 18:11:36.396749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.672 [2024-12-09 18:11:36.396758] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.672 [2024-12-09 18:11:36.396767] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.672 [2024-12-09 18:11:36.407223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.672 qpair failed and we were unable to recover it.
00:23:28.672 [2024-12-09 18:11:36.416681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.672 [2024-12-09 18:11:36.416727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.672 [2024-12-09 18:11:36.416744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.672 [2024-12-09 18:11:36.416753] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.672 [2024-12-09 18:11:36.416762] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.672 [2024-12-09 18:11:36.427150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.672 qpair failed and we were unable to recover it.
00:23:28.672 [2024-12-09 18:11:36.436747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.672 [2024-12-09 18:11:36.436784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.672 [2024-12-09 18:11:36.436802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.672 [2024-12-09 18:11:36.436811] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.672 [2024-12-09 18:11:36.436819] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.672 [2024-12-09 18:11:36.447164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.672 qpair failed and we were unable to recover it.
00:23:28.672 [2024-12-09 18:11:36.456898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:28.672 [2024-12-09 18:11:36.456941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:28.672 [2024-12-09 18:11:36.456964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:28.673 [2024-12-09 18:11:36.456977] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:28.673 [2024-12-09 18:11:36.456986] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:28.673 [2024-12-09 18:11:36.467106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:28.673 qpair failed and we were unable to recover it.
00:23:28.673 [2024-12-09 18:11:36.476854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.476898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.476915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.476924] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.476933] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.487125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 00:23:28.673 [2024-12-09 18:11:36.496997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.497040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.497057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.497067] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.497076] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.507158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 00:23:28.673 [2024-12-09 18:11:36.517079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.517119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.517137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.517146] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.517155] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.527360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 
00:23:28.673 [2024-12-09 18:11:36.537166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.537206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.537224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.537233] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.537245] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.547440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 00:23:28.673 [2024-12-09 18:11:36.557227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.557271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.557289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.557298] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.557307] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.567630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 00:23:28.673 [2024-12-09 18:11:36.577372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.577408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.577425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.577434] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.577443] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.587619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 
00:23:28.673 [2024-12-09 18:11:36.597353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.597390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.597407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.597416] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.597425] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.607650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 00:23:28.673 [2024-12-09 18:11:36.617461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.617500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.617518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.617528] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.617536] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.627681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 00:23:28.673 [2024-12-09 18:11:36.637387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.673 [2024-12-09 18:11:36.637428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.673 [2024-12-09 18:11:36.637446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.673 [2024-12-09 18:11:36.637456] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.673 [2024-12-09 18:11:36.637464] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.673 [2024-12-09 18:11:36.647801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.673 qpair failed and we were unable to recover it. 
00:23:28.932 [2024-12-09 18:11:36.657478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.657514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.657532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.657541] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.657549] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.667856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.677477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.677516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.677534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.677543] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.677551] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.687998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.697620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.697660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.697678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.697687] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.697695] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.708023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 
00:23:28.932 [2024-12-09 18:11:36.717620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.717661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.717682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.717691] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.717700] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.727863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.737710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.737748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.737765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.737775] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.737783] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.748018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.757695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.757731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.757748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.757758] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.757766] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.768618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 
00:23:28.932 [2024-12-09 18:11:36.777842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.777882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.777899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.777909] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.777917] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.788095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.797830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.797868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.797885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.797897] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.797905] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.808258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.817924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.817970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.817988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.817997] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.818006] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.828087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 
00:23:28.932 [2024-12-09 18:11:36.837961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.837999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.838017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.838027] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.838035] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.848456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.858119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.858159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.858177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.858186] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.858195] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.932 [2024-12-09 18:11:36.868531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.932 qpair failed and we were unable to recover it. 00:23:28.932 [2024-12-09 18:11:36.878109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.932 [2024-12-09 18:11:36.878150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.932 [2024-12-09 18:11:36.878168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.932 [2024-12-09 18:11:36.878177] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.932 [2024-12-09 18:11:36.878186] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.933 [2024-12-09 18:11:36.888446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.933 qpair failed and we were unable to recover it. 
00:23:28.933 [2024-12-09 18:11:36.898183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:28.933 [2024-12-09 18:11:36.898224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:28.933 [2024-12-09 18:11:36.898241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:28.933 [2024-12-09 18:11:36.898251] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:28.933 [2024-12-09 18:11:36.898259] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:28.933 [2024-12-09 18:11:36.908279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:28.933 qpair failed and we were unable to recover it. 00:23:29.191 [2024-12-09 18:11:36.918178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:36.918217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:36.918235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:36.918244] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:36.918253] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:36.928578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 00:23:29.191 [2024-12-09 18:11:36.938261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:36.938302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:36.938319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:36.938329] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:36.938338] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:36.948450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 
00:23:29.191 [2024-12-09 18:11:36.958419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:36.958462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:36.958479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:36.958489] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:36.958497] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:36.968729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 00:23:29.191 [2024-12-09 18:11:36.978462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:36.978499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:36.978516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:36.978525] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:36.978534] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:36.988662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 00:23:29.191 [2024-12-09 18:11:36.998537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:36.998582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:36.998599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:36.998608] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:36.998617] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:37.008796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 
00:23:29.191 [2024-12-09 18:11:37.018480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:37.018521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:37.018538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:37.018547] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:37.018556] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:37.028803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 00:23:29.191 [2024-12-09 18:11:37.038591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:37.038629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:37.038646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:37.038656] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:37.038664] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.191 [2024-12-09 18:11:37.048962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.191 qpair failed and we were unable to recover it. 00:23:29.191 [2024-12-09 18:11:37.058577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.191 [2024-12-09 18:11:37.058618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.191 [2024-12-09 18:11:37.058639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.191 [2024-12-09 18:11:37.058648] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.191 [2024-12-09 18:11:37.058657] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.192 [2024-12-09 18:11:37.068936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.192 qpair failed and we were unable to recover it. 
00:23:29.192 [2024-12-09 18:11:37.078687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.192 [2024-12-09 18:11:37.078723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.192 [2024-12-09 18:11:37.078741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.192 [2024-12-09 18:11:37.078750] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.192 [2024-12-09 18:11:37.078759] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.192 [2024-12-09 18:11:37.089058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.192 qpair failed and we were unable to recover it. 00:23:29.192 [2024-12-09 18:11:37.098786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.192 [2024-12-09 18:11:37.098828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.192 [2024-12-09 18:11:37.098846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.192 [2024-12-09 18:11:37.098855] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.192 [2024-12-09 18:11:37.098864] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.192 [2024-12-09 18:11:37.109079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.192 qpair failed and we were unable to recover it. 00:23:29.192 [2024-12-09 18:11:37.118804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.192 [2024-12-09 18:11:37.118843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.192 [2024-12-09 18:11:37.118860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.192 [2024-12-09 18:11:37.118870] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.192 [2024-12-09 18:11:37.118878] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.192 [2024-12-09 18:11:37.129118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.192 qpair failed and we were unable to recover it. 
00:23:29.192 [2024-12-09 18:11:37.138864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.192 [2024-12-09 18:11:37.138906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.192 [2024-12-09 18:11:37.138923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.192 [2024-12-09 18:11:37.138936] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.192 [2024-12-09 18:11:37.138945] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.192 [2024-12-09 18:11:37.149327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.192 qpair failed and we were unable to recover it. 00:23:29.192 [2024-12-09 18:11:37.159011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.192 [2024-12-09 18:11:37.159053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.192 [2024-12-09 18:11:37.159070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.192 [2024-12-09 18:11:37.159079] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.192 [2024-12-09 18:11:37.159088] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.169159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.179016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.179058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.179076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.179085] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.179094] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.189291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 
00:23:29.451 [2024-12-09 18:11:37.199191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.199232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.199249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.199259] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.199267] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.209433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.219009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.219051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.219068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.219078] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.219086] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.229396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.239059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.239097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.239115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.239125] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.239133] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.249566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 
00:23:29.451 [2024-12-09 18:11:37.259081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.259121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.259138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.259147] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.259156] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.269438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.279244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.279281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.279299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.279308] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.279317] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.289530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.299324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.299363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.299380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.299390] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.299398] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.309635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 
00:23:29.451 [2024-12-09 18:11:37.319446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.319482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.319500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.319509] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.319518] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.329650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.339485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.339526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.339544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.339554] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.339562] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.349526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.451 qpair failed and we were unable to recover it. 00:23:29.451 [2024-12-09 18:11:37.359665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.451 [2024-12-09 18:11:37.359711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.451 [2024-12-09 18:11:37.359728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.451 [2024-12-09 18:11:37.359737] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.451 [2024-12-09 18:11:37.359746] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.451 [2024-12-09 18:11:37.369871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.452 qpair failed and we were unable to recover it. 
00:23:29.452 [2024-12-09 18:11:37.379621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.452 [2024-12-09 18:11:37.379665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.452 [2024-12-09 18:11:37.379682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.452 [2024-12-09 18:11:37.379692] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.452 [2024-12-09 18:11:37.379700] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.452 [2024-12-09 18:11:37.389939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.452 qpair failed and we were unable to recover it. 00:23:29.452 [2024-12-09 18:11:37.399697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.452 [2024-12-09 18:11:37.399732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.452 [2024-12-09 18:11:37.399752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.452 [2024-12-09 18:11:37.399762] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.452 [2024-12-09 18:11:37.399770] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.452 [2024-12-09 18:11:37.410312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.452 qpair failed and we were unable to recover it. 00:23:29.452 [2024-12-09 18:11:37.419729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.452 [2024-12-09 18:11:37.419772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.452 [2024-12-09 18:11:37.419789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.452 [2024-12-09 18:11:37.419799] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.452 [2024-12-09 18:11:37.419807] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.429897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-12-09 18:11:37.439774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.439813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.439830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.439840] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.439848] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.450342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.459825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.459867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.459884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.459894] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.459902] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.470120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.480011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.480054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.480071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.480080] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.480094] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.490221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-12-09 18:11:37.500032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.500075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.500093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.500102] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.500111] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.510113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.519992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.520036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.520053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.520063] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.520072] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.530206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.540045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.540081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.540099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.540108] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.540117] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.550331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-12-09 18:11:37.560236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.560274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.560291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.560301] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.560311] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.570515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.580208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.580250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.580268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.580277] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.580286] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.590381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.600388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.600434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.600452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.600462] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.600471] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.610427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 
00:23:29.710 [2024-12-09 18:11:37.620333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.620375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.620393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.620402] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.620411] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.630668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.640403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.640446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.640464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.640473] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.640483] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.650838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.710 qpair failed and we were unable to recover it. 00:23:29.710 [2024-12-09 18:11:37.660498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.710 [2024-12-09 18:11:37.660544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.710 [2024-12-09 18:11:37.660562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.710 [2024-12-09 18:11:37.660571] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.710 [2024-12-09 18:11:37.660580] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.710 [2024-12-09 18:11:37.670488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.711 qpair failed and we were unable to recover it. 
00:23:29.711 [2024-12-09 18:11:37.680398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.711 [2024-12-09 18:11:37.680443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.711 [2024-12-09 18:11:37.680461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.711 [2024-12-09 18:11:37.680470] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.711 [2024-12-09 18:11:37.680479] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.969 [2024-12-09 18:11:37.690656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.969 qpair failed and we were unable to recover it. 00:23:29.969 [2024-12-09 18:11:37.700599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.969 [2024-12-09 18:11:37.700639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.969 [2024-12-09 18:11:37.700656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.969 [2024-12-09 18:11:37.700666] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.969 [2024-12-09 18:11:37.700674] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.969 [2024-12-09 18:11:37.710783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.969 qpair failed and we were unable to recover it. 00:23:29.969 [2024-12-09 18:11:37.720606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.969 [2024-12-09 18:11:37.720645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.969 [2024-12-09 18:11:37.720663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.969 [2024-12-09 18:11:37.720673] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.969 [2024-12-09 18:11:37.720681] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.969 [2024-12-09 18:11:37.730846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.969 qpair failed and we were unable to recover it. 
00:23:29.969 [2024-12-09 18:11:37.740750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.969 [2024-12-09 18:11:37.740791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.969 [2024-12-09 18:11:37.740812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.740821] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.740830] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.751009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.760677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.760718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.760735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.760744] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.760753] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.771137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.780688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.780732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.780750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.780759] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.780768] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.791084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 
00:23:29.970 [2024-12-09 18:11:37.800873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.800910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.800927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.800936] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.800945] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.811171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.820886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.820926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.820944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.820964] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.820976] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.831067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.840860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.840903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.840921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.840930] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.840939] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.851364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 
00:23:29.970 [2024-12-09 18:11:37.860968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.861010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.861028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.861037] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.861046] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.871291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.881051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.881091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.881109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.881118] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.881127] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.891228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.901161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.901203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.901220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.901230] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.901238] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.911514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 
00:23:29.970 [2024-12-09 18:11:37.921150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.921188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.921206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.921215] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.921224] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:29.970 [2024-12-09 18:11:37.931456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.970 qpair failed and we were unable to recover it. 00:23:29.970 [2024-12-09 18:11:37.941381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:29.970 [2024-12-09 18:11:37.941417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:29.970 [2024-12-09 18:11:37.941435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:29.970 [2024-12-09 18:11:37.941444] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:29.970 [2024-12-09 18:11:37.941452] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:37.951518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 00:23:30.229 [2024-12-09 18:11:37.961464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:37.961503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:37.961520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:37.961530] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:37.961538] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:37.971769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 
00:23:30.229 [2024-12-09 18:11:37.981396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:37.981437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:37.981454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:37.981464] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:37.981472] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:37.991547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 00:23:30.229 [2024-12-09 18:11:38.001513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:38.001562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:38.001579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:38.001588] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:38.001596] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:38.011768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 00:23:30.229 [2024-12-09 18:11:38.021385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:38.021421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:38.021438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:38.021447] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:38.021456] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:38.031589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 
00:23:30.229 [2024-12-09 18:11:38.041524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:38.041563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:38.041580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:38.041589] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:38.041598] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:38.052228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 00:23:30.229 [2024-12-09 18:11:38.061468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:38.061511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:38.061529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:38.061538] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:38.061547] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:38.071902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 00:23:30.229 [2024-12-09 18:11:38.081701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:38.081746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:38.081766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:38.081776] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:38.081784] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:38.091989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.229 qpair failed and we were unable to recover it. 
00:23:30.229 [2024-12-09 18:11:38.101606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.229 [2024-12-09 18:11:38.101642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.229 [2024-12-09 18:11:38.101659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.229 [2024-12-09 18:11:38.101668] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.229 [2024-12-09 18:11:38.101676] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.229 [2024-12-09 18:11:38.111960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.230 qpair failed and we were unable to recover it. 00:23:30.230 [2024-12-09 18:11:38.121704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.230 [2024-12-09 18:11:38.121747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.230 [2024-12-09 18:11:38.121765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.230 [2024-12-09 18:11:38.121774] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.230 [2024-12-09 18:11:38.121783] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.230 [2024-12-09 18:11:38.131892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.230 qpair failed and we were unable to recover it. 00:23:30.230 [2024-12-09 18:11:38.141736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.230 [2024-12-09 18:11:38.141780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.230 [2024-12-09 18:11:38.141797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.230 [2024-12-09 18:11:38.141806] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.230 [2024-12-09 18:11:38.141815] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.230 [2024-12-09 18:11:38.151996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.230 qpair failed and we were unable to recover it. 
00:23:30.230 [2024-12-09 18:11:38.161796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.230 [2024-12-09 18:11:38.161839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.230 [2024-12-09 18:11:38.161856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.230 [2024-12-09 18:11:38.161866] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.230 [2024-12-09 18:11:38.161877] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.230 [2024-12-09 18:11:38.172261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.230 qpair failed and we were unable to recover it. 00:23:30.230 [2024-12-09 18:11:38.181830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.230 [2024-12-09 18:11:38.181868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.230 [2024-12-09 18:11:38.181886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.230 [2024-12-09 18:11:38.181895] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.230 [2024-12-09 18:11:38.181903] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.230 [2024-12-09 18:11:38.192072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.230 qpair failed and we were unable to recover it. 00:23:30.230 [2024-12-09 18:11:38.202092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.230 [2024-12-09 18:11:38.202130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.230 [2024-12-09 18:11:38.202149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.230 [2024-12-09 18:11:38.202158] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.230 [2024-12-09 18:11:38.202167] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.212380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 
00:23:30.489 [2024-12-09 18:11:38.221980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.222021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.222039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.222048] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.222057] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.232363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.242050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.242089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.242107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.242116] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.242125] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.252440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.262143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.262190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.262207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.262216] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.262225] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.272471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 
00:23:30.489 [2024-12-09 18:11:38.282205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.282246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.282263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.282272] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.282280] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.292696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.302352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.302393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.302411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.302420] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.302429] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.312550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.322344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.322383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.322401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.322410] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.322419] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.332739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 
00:23:30.489 [2024-12-09 18:11:38.342411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.342456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.342473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.342483] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.342491] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.352683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.362476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.362511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.362528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.362538] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.362546] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.372727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.382587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.382629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.382647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.382656] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.382665] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.392895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 
00:23:30.489 [2024-12-09 18:11:38.402520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.402564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.402582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.402591] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.402599] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.413022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.422699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.422738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.422755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.422768] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.422776] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.432900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 00:23:30.489 [2024-12-09 18:11:38.442708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.442747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.442764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.442773] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.489 [2024-12-09 18:11:38.442782] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.489 [2024-12-09 18:11:38.453055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.489 qpair failed and we were unable to recover it. 
00:23:30.489 [2024-12-09 18:11:38.462733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.489 [2024-12-09 18:11:38.462773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.489 [2024-12-09 18:11:38.462791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.489 [2024-12-09 18:11:38.462800] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.490 [2024-12-09 18:11:38.462809] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.748 [2024-12-09 18:11:38.473083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.748 qpair failed and we were unable to recover it. 00:23:30.748 [2024-12-09 18:11:38.482757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.748 [2024-12-09 18:11:38.482801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.748 [2024-12-09 18:11:38.482819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.748 [2024-12-09 18:11:38.482830] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.748 [2024-12-09 18:11:38.482840] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.748 [2024-12-09 18:11:38.493039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.748 qpair failed and we were unable to recover it. 00:23:30.748 [2024-12-09 18:11:38.502835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.748 [2024-12-09 18:11:38.502878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.748 [2024-12-09 18:11:38.502895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.748 [2024-12-09 18:11:38.502904] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.748 [2024-12-09 18:11:38.502913] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.748 [2024-12-09 18:11:38.513159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.748 qpair failed and we were unable to recover it. 
00:23:30.748 [2024-12-09 18:11:38.522969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.748 [2024-12-09 18:11:38.523006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.748 [2024-12-09 18:11:38.523024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.748 [2024-12-09 18:11:38.523033] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.748 [2024-12-09 18:11:38.523042] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.748 [2024-12-09 18:11:38.533401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.748 qpair failed and we were unable to recover it. 00:23:30.748 [2024-12-09 18:11:38.542932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.748 [2024-12-09 18:11:38.542977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.748 [2024-12-09 18:11:38.542994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.748 [2024-12-09 18:11:38.543004] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.748 [2024-12-09 18:11:38.543012] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.748 [2024-12-09 18:11:38.553339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.748 qpair failed and we were unable to recover it. 00:23:30.748 [2024-12-09 18:11:38.562981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:30.748 [2024-12-09 18:11:38.563025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:30.748 [2024-12-09 18:11:38.563042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:30.748 [2024-12-09 18:11:38.563052] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:30.748 [2024-12-09 18:11:38.563060] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:30.748 [2024-12-09 18:11:38.573370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:30.748 qpair failed and we were unable to recover it. 
00:23:30.748 [2024-12-09 18:11:38.583031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:30.748 [2024-12-09 18:11:38.583074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:30.748 [2024-12-09 18:11:38.583091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:30.748 [2024-12-09 18:11:38.583100] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:30.748 [2024-12-09 18:11:38.583110] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:30.748 [2024-12-09 18:11:38.593532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.748 qpair failed and we were unable to recover it.
00:23:30.748 [2024-12-09 18:11:38.603121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:30.748 [2024-12-09 18:11:38.603163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:30.748 [2024-12-09 18:11:38.603181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:30.748 [2024-12-09 18:11:38.603190] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:30.748 [2024-12-09 18:11:38.603198] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:30.748 [2024-12-09 18:11:38.613587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.748 qpair failed and we were unable to recover it.
00:23:30.748 [2024-12-09 18:11:38.623266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:30.748 [2024-12-09 18:11:38.623307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:30.748 [2024-12-09 18:11:38.623325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:30.748 [2024-12-09 18:11:38.623334] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:30.748 [2024-12-09 18:11:38.623343] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:23:30.748 [2024-12-09 18:11:38.633510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:30.748 qpair failed and we were unable to recover it.
00:23:30.748 [2024-12-09 18:11:38.633644] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
A controller has encountered a failure and is being reset.
00:23:30.748 [2024-12-09 18:11:38.643744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:30.748 [2024-12-09 18:11:38.643802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:30.748 [2024-12-09 18:11:38.643862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:30.748 [2024-12-09 18:11:38.643896] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:30.748 [2024-12-09 18:11:38.643928] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:23:30.748 [2024-12-09 18:11:38.653827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.748 qpair failed and we were unable to recover it.
00:23:30.748 [2024-12-09 18:11:38.663467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:30.748 [2024-12-09 18:11:38.663515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:30.748 [2024-12-09 18:11:38.663549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:30.748 [2024-12-09 18:11:38.663570] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:30.748 [2024-12-09 18:11:38.663589] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:23:30.748 [2024-12-09 18:11:38.673655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:30.748 qpair failed and we were unable to recover it.
00:23:30.748 [2024-12-09 18:11:38.673833] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:23:30.748 [2024-12-09 18:11:38.675810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:23:30.748 Controller properly reset.
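The retry loop logged above is the expected steady state while the target side has dropped the controller: each pass, the fabrics CONNECT for the I/O qpair is rejected ("Unknown controller ID 0x1", reported to the host as sct 1, sc 130, which reads as the NVMe-oF Connect Invalid Parameters status), the RDMA qpair then surfaces -6 (-ENXIO) from spdk_nvme_qpair_process_completions, and once a Keep Alive submission also fails the host resets the controller. A minimal sketch of that detect-and-reset pattern against the public SPDK API follows; the helper name poll_and_recover is hypothetical and error handling is trimmed, so this illustrates the mechanism rather than the test's actual code.

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper: poll one qpair and, on the transport-level
     * error seen above ("-6 (No such device or address)" == -ENXIO),
     * reset the parent controller. */
    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* 0 means "no limit on completions processed"; a negative
         * return indicates a transport error on the qpair. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc == -ENXIO) {
            /* In-flight I/O gets aborted and completes with sct=0, sc=8
             * (Command Aborted due to SQ Deletion), matching the
             * completion storm logged after the reset. */
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                /* A real caller would back off and retry, or detach. */
            }
        }
    }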
00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Read completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Read completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Read completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Write completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Read completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Read completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.119 Read completed with error (sct=0, sc=8) 00:23:32.119 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Write completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 Read completed with error (sct=0, sc=8) 00:23:32.120 starting I/O failed 00:23:32.120 [2024-12-09 18:11:39.689623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:33.053 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 
00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Write completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 Read completed with error (sct=0, sc=8) 00:23:33.054 starting I/O failed 00:23:33.054 [2024-12-09 18:11:40.695114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:33.054 Initializing NVMe Controllers 00:23:33.054 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.054 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:23:33.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:23:33.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:23:33.054 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:23:33.054 Initialization complete. Launching workers. 
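The bursts of failed completions above decode as sct 0 (generic command status) with sc 8 = 0x08, Command Aborted due to SQ Deletion: when a qpair is torn down, every command still outstanding on it is completed back with this status, so each burst is 32 entries, matching the queue depth the reconnect example is driven with (-q 32, visible in the tc3 invocation below). One way to sanity-check a captured copy of this output (reconnect.log is a hypothetical capture path):

    grep -c 'completed with error (sct=0, sc=8)' reconnect.log    # expect a multiple of 32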
00:23:33.054 Starting thread on core 1 00:23:33.054 Starting thread on core 2 00:23:33.054 Starting thread on core 3 00:23:33.054 Starting thread on core 0 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:23:33.054 00:23:33.054 real 0m12.943s 00:23:33.054 user 0m26.529s 00:23:33.054 sys 0m3.347s 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.054 ************************************ 00:23:33.054 END TEST nvmf_target_disconnect_tc2 00:23:33.054 ************************************ 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:33.054 ************************************ 00:23:33.054 START TEST nvmf_target_disconnect_tc3 00:23:33.054 ************************************ 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2460672 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:23:33.054 18:11:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:23:34.951 18:11:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2459287 00:23:34.951 18:11:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write 
completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Write completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 Read completed with error (sct=0, sc=8) 00:23:36.329 starting I/O failed 00:23:36.329 [2024-12-09 18:11:44.046281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:23:36.897 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2459287 Killed "${NVMF_APP[@]}" "$@" 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2461454 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2461454 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 2461454 ']' 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.897 18:11:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:37.156 [2024-12-09 18:11:44.907363] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 00:23:37.156 [2024-12-09 18:11:44.907415] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.156 [2024-12-09 18:11:44.999337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.156 [2024-12-09 18:11:45.036724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.156 [2024-12-09 18:11:45.036765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.156 [2024-12-09 18:11:45.036774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.156 [2024-12-09 18:11:45.036783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.156 [2024-12-09 18:11:45.036790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
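Core placement is what lets tc3 kill the old target and bring up a new one under a live host: the reconnect app was pinned to cores 0-3 with -c 0xF, while the replacement nvmf_tgt is launched with -m 0xF0, i.e. cores 4-7, exactly the reactors that start on the next lines, so host and target never share a core. A quick mask check (illustrative shell):

    for m in 0x0F 0xF0; do
      printf 'mask %s -> cores:' "$m"
      for i in {0..7}; do (( (m >> i) & 1 )) && printf ' %d' "$i"; done
      echo    # prints: mask 0x0F -> cores: 0 1 2 3, then: mask 0xF0 -> cores: 4 5 6 7
    done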
00:23:37.156 [2024-12-09 18:11:45.038639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:37.156 [2024-12-09 18:11:45.038753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:37.156 [2024-12-09 18:11:45.038860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:37.156 [2024-12-09 18:11:45.038862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Read completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 Write completed with error (sct=0, sc=8) 00:23:37.156 starting I/O failed 00:23:37.156 [2024-12-09 18:11:45.051429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:37.156 [2024-12-09 18:11:45.053022] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:37.156 [2024-12-09 18:11:45.053044] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:37.156 [2024-12-09 18:11:45.053053] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:38.090 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.090 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 Malloc0 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 [2024-12-09 18:11:45.858041] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c1aa30/0x1c26790) succeed. 00:23:38.091 [2024-12-09 18:11:45.867725] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c1c0c0/0x1c67e30) succeed. 
00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.091 18:11:45 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 [2024-12-09 18:11:46.016363] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.091 18:11:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2460672 00:23:38.091 [2024-12-09 18:11:46.057052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:38.091 qpair failed and we were unable to recover it. 
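For reference, the target-side bring-up that tc3 just drove through rpc_cmd is equivalent to the following RPC sequence (a sketch assuming SPDK's stock scripts/rpc.py client against the default /var/tmp/spdk.sock socket):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Note that the listeners come up on the failover address 192.168.100.9 rather than the 192.168.100.8 the host is still dialing, which is why rejected CONNECT attempts keep scrolling below.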
00:23:38.091 [2024-12-09 18:11:46.058621] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:38.091 [2024-12-09 18:11:46.058641] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:38.091 [2024-12-09 18:11:46.058649] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:39.465 [2024-12-09 18:11:47.062527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:39.465 qpair failed and we were unable to recover it. 00:23:39.465 [2024-12-09 18:11:47.064067] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:39.465 [2024-12-09 18:11:47.064085] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:39.465 [2024-12-09 18:11:47.064093] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:40.398 [2024-12-09 18:11:48.068032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:40.398 qpair failed and we were unable to recover it. 00:23:40.398 [2024-12-09 18:11:48.069566] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:40.398 [2024-12-09 18:11:48.069584] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:40.398 [2024-12-09 18:11:48.069592] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:41.385 [2024-12-09 18:11:49.073516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:41.385 qpair failed and we were unable to recover it. 00:23:41.385 [2024-12-09 18:11:49.075102] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:41.385 [2024-12-09 18:11:49.075121] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:41.385 [2024-12-09 18:11:49.075129] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:42.319 [2024-12-09 18:11:50.079218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:42.319 qpair failed and we were unable to recover it. 
00:23:42.319 [2024-12-09 18:11:50.080677] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:42.319 [2024-12-09 18:11:50.080697] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:42.319 [2024-12-09 18:11:50.080706] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:43.251 [2024-12-09 18:11:51.084682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:43.251 qpair failed and we were unable to recover it. 00:23:43.251 [2024-12-09 18:11:51.086196] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.251 [2024-12-09 18:11:51.086220] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.251 [2024-12-09 18:11:51.086229] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:23:44.183 [2024-12-09 18:11:52.090134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:23:44.183 qpair failed and we were unable to recover it. 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 
00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Write completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 Read completed with error (sct=0, sc=8) 00:23:45.554 starting I/O failed 00:23:45.554 [2024-12-09 18:11:53.095276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:23:45.554 [2024-12-09 18:11:53.096825] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:45.554 [2024-12-09 18:11:53.096845] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:45.554 [2024-12-09 18:11:53.096853] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:23:46.486 [2024-12-09 18:11:54.100746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:23:46.486 qpair failed and we were unable to recover it. 00:23:46.486 [2024-12-09 18:11:54.102393] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:46.486 [2024-12-09 18:11:54.102412] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:46.486 [2024-12-09 18:11:54.102420] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:23:47.417 [2024-12-09 18:11:55.106341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:23:47.417 qpair failed and we were unable to recover it. 
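The once-per-second pattern above is the host's CONNECT retry loop: each attempt against 192.168.100.8 is refused at the CM layer (RDMA_CM_EVENT_REJECTED, status = 8) because nothing listens there any more, and the reported connect error -74 corresponds to -EBADMSG in Linux errno numbering. One way to watch, from outside the test, for the moment the failover listener becomes reachable (a hypothetical probe; the harness itself just lets reconnect keep retrying):

    nvme discover -t rdma -a 192.168.100.9 -s 4420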
00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Read completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 Write completed with error (sct=0, sc=8) 00:23:48.351 starting I/O failed 00:23:48.351 [2024-12-09 18:11:56.111447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:23:48.351 [2024-12-09 18:11:56.111473] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:23:48.351 A controller has encountered a failure and is being reset. 00:23:48.351 Resorting to new failover address 192.168.100.9 00:23:48.351 [2024-12-09 18:11:56.111575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:23:48.351 [2024-12-09 18:11:56.111643] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:23:48.351 [2024-12-09 18:11:56.144771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:23:48.351 Controller properly reset.
00:23:48.351 Initializing NVMe Controllers
00:23:48.351 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:48.351 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:48.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:23:48.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:23:48.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:23:48.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:23:48.351 Initialization complete. Launching workers.
00:23:48.351 Starting thread on core 1
00:23:48.351 Starting thread on core 2
00:23:48.351 Starting thread on core 3
00:23:48.351 Starting thread on core 0
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:23:48.351
00:23:48.351 real 0m15.389s
00:23:48.351 user 1m1.437s
00:23:48.351 sys 0m4.679s
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:48.351 ************************************
00:23:48.351 END TEST nvmf_target_disconnect_tc3
00:23:48.351 ************************************
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:23:48.351 rmmod nvme_rdma
00:23:48.351 rmmod nvme_fabrics
00:23:48.351 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2461454 ']'
00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- #
killprocess 2461454 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2461454 ']' 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2461454 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2461454 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2461454' 00:23:48.610 killing process with pid 2461454 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2461454 00:23:48.610 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2461454 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:48.873 00:23:48.873 real 0m37.638s 00:23:48.873 user 2m17.099s 00:23:48.873 sys 0m14.360s 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:48.873 ************************************ 00:23:48.873 END TEST nvmf_target_disconnect 00:23:48.873 ************************************ 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:48.873 00:23:48.873 real 5m35.774s 00:23:48.873 user 13m0.874s 00:23:48.873 sys 1m46.310s 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.873 18:11:56 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.873 ************************************ 00:23:48.873 END TEST nvmf_host 00:23:48.873 ************************************ 00:23:48.873 18:11:56 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:23:48.873 00:23:48.873 real 18m4.304s 00:23:48.873 user 43m6.376s 00:23:48.873 sys 5m52.265s 00:23:48.873 18:11:56 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.873 18:11:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:48.873 ************************************ 00:23:48.873 END TEST nvmf_rdma 00:23:48.873 ************************************ 00:23:48.873 18:11:56 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:23:48.873 18:11:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.873 18:11:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.873 18:11:56 -- common/autotest_common.sh@10 -- # set +x 00:23:48.873 ************************************ 00:23:48.873 START TEST spdkcli_nvmf_rdma 00:23:48.873 ************************************ 00:23:48.873 18:11:56 spdkcli_nvmf_rdma -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:23:49.132 * Looking for test storage... 00:23:49.132 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:23:49.132 18:11:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:49.132 18:11:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:49.132 18:11:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.132 --rc genhtml_branch_coverage=1 00:23:49.132 --rc genhtml_function_coverage=1 00:23:49.132 --rc genhtml_legend=1 00:23:49.132 --rc geninfo_all_blocks=1 00:23:49.132 --rc geninfo_unexecuted_blocks=1 00:23:49.132 00:23:49.132 ' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.132 --rc genhtml_branch_coverage=1 00:23:49.132 --rc genhtml_function_coverage=1 00:23:49.132 --rc genhtml_legend=1 00:23:49.132 --rc geninfo_all_blocks=1 00:23:49.132 --rc geninfo_unexecuted_blocks=1 00:23:49.132 00:23:49.132 ' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.132 --rc genhtml_branch_coverage=1 00:23:49.132 --rc genhtml_function_coverage=1 00:23:49.132 --rc genhtml_legend=1 00:23:49.132 --rc geninfo_all_blocks=1 00:23:49.132 --rc geninfo_unexecuted_blocks=1 00:23:49.132 00:23:49.132 ' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.132 --rc genhtml_branch_coverage=1 00:23:49.132 --rc genhtml_function_coverage=1 00:23:49.132 --rc genhtml_legend=1 00:23:49.132 --rc geninfo_all_blocks=1 00:23:49.132 --rc geninfo_unexecuted_blocks=1 00:23:49.132 00:23:49.132 ' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:23:49.132 
18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 
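The NVME_HOSTNQN sourced above comes straight from nvme-cli, which typically derives it from the machine's DMI system UUID; an illustrative run on this node:

    nvme gen-hostnqn
    # -> nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e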
00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.132 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2463555 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2463555 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 2463555 ']' 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.132 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:49.391 [2024-12-09 18:11:57.117107] Starting SPDK v25.01-pre git sha1 2e1d23f4b / DPDK 24.03.0 initialization... 
00:23:49.391 [2024-12-09 18:11:57.117162] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463555 ] 00:23:49.391 [2024-12-09 18:11:57.205955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:49.391 [2024-12-09 18:11:57.248056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.391 [2024-12-09 18:11:57.248057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.391 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.391 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:23:49.391 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:23:49.391 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:49.391 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.649 18:11:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:23:56.216 18:12:04 
spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:56.216 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:56.216 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:56.216 
18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:56.216 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:56.216 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:56.216 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:56.476 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:56.476 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:56.476 altname enp217s0f0np0 00:23:56.476 altname ens818f0np0 00:23:56.476 inet 192.168.100.8/24 scope global mlx_0_0 00:23:56.476 valid_lft forever preferred_lft forever 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:56.476 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:56.476 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:56.476 altname enp217s0f1np1 00:23:56.476 altname ens818f1np1 00:23:56.476 inet 192.168.100.9/24 scope global mlx_0_1 00:23:56.476 valid_lft forever preferred_lft forever 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:56.476 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:56.477 
192.168.100.9' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:56.477 192.168.100.9' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:56.477 192.168.100.9' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:56.477 18:12:04 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:56.477 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:56.477 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:23:56.477 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:56.477 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:56.477 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:23:56.477 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:56.477 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:56.477 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:56.477 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:56.477 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:56.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:56.477 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:56.477 ' 00:23:59.766 [2024-12-09 18:12:07.120604] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17f3ae0/0x17652c0) succeed. 00:23:59.766 [2024-12-09 18:12:07.130191] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17f51c0/0x16d0240) succeed. 
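For context: the spdkcli_job.py batch replayed above drives SPDK's interactive CLI non-interactively, and each quoted triple is a command, a substring expected in its output, and whether the command should succeed. A minimal sketch of the same create flow issued one command at a time through scripts/spdkcli.py (the same one-shot invocation style this log uses later for "spdkcli.py ll /nvmf"), assuming a running nvmf_tgt and the 192.168.100.8 RDMA address shown in this log, would be:

  $ scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  $ scripts/spdkcli.py /nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $ scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $ scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
  $ scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4

Each invocation connects to the running target, executes the single configshell command, and exits; the nvmf_rdma_listen NOTICE lines that follow are the target acknowledging the RDMA listeners created this way.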
00:24:00.702 [2024-12-09 18:12:08.529053] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:24:03.237 [2024-12-09 18:12:11.017126] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:24:05.771 [2024-12-09 18:12:13.164164] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:24:07.149 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:07.149 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:07.149 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:07.149 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:07.149 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:07.149 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:07.149 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:07.149 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:07.149 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:07.149 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:07.149 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:07.149 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:07.149 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:24:07.149 18:12:14 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:07.407 18:12:15 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:07.665 18:12:15 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:07.665 18:12:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:07.666 18:12:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.666 18:12:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.666 18:12:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:07.666 18:12:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.666 18:12:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.666 18:12:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:24:07.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:24:07.666 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:07.666 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:07.666 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:07.666 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:07.666 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:07.666 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:07.666 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:07.666 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:07.666 ' 00:24:14.235 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:14.235 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:14.235 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:14.235 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:14.235 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:24:14.235 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:24:14.235 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:14.235 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:14.235 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:14.235 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:14.235 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:14.235 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:14.235 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:14.235 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2463555 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 2463555 ']' 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 2463555 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463555 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463555' 00:24:14.235 killing process with pid 2463555 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 2463555 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 2463555 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:24:14.235 rmmod nvme_rdma 00:24:14.235 rmmod nvme_fabrics 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:14.235 00:24:14.235 real 0m24.630s 00:24:14.235 user 0m54.313s 00:24:14.235 sys 0m6.380s 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.235 18:12:21 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:14.235 ************************************ 00:24:14.235 END TEST spdkcli_nvmf_rdma 00:24:14.235 ************************************ 00:24:14.235 18:12:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:14.235 18:12:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:14.236 18:12:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:14.236 18:12:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:14.236 18:12:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:14.236 18:12:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:14.236 18:12:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:14.236 18:12:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.236 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:24:14.236 18:12:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:14.236 18:12:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:14.236 18:12:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:14.236 18:12:21 -- common/autotest_common.sh@10 -- # set +x 00:24:20.861 INFO: APP EXITING 00:24:20.861 INFO: killing all VMs 00:24:20.861 INFO: killing vhost app 00:24:20.861 INFO: EXIT DONE 00:24:23.399 Waiting for block devices as requested 00:24:23.399 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:23.658 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:23.658 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:23.658 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:23.918 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:23.918 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:23.918 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:24.177 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:24:24.177 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:24.177 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:24.437 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:24.437 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:24.437 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:24.697 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:24.697 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:24.697 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:24.955 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:24:29.146 Cleaning 00:24:29.147 Removing: /var/run/dpdk/spdk0/config 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:24:29.147 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:29.147 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:29.147 Removing: /var/run/dpdk/spdk1/config 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:24:29.147 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:29.147 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:29.147 Removing: /var/run/dpdk/spdk1/mp_socket 00:24:29.147 Removing: /var/run/dpdk/spdk2/config 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:24:29.147 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:29.147 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:29.147 Removing: /var/run/dpdk/spdk3/config 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:24:29.147 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:29.147 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:29.147 Removing: /var/run/dpdk/spdk4/config 00:24:29.147 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:24:29.147 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:29.147 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:29.147 Removing: /dev/shm/bdevperf_trace.pid2205523 00:24:29.147 Removing: /dev/shm/bdev_svc_trace.1 00:24:29.147 Removing: /dev/shm/nvmf_trace.0 00:24:29.147 Removing: /dev/shm/spdk_tgt_trace.pid2159922 00:24:29.147 Removing: /var/run/dpdk/spdk0 00:24:29.147 Removing: /var/run/dpdk/spdk1 00:24:29.147 Removing: /var/run/dpdk/spdk2 00:24:29.147 Removing: /var/run/dpdk/spdk3 00:24:29.147 Removing: /var/run/dpdk/spdk4 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2157170 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2158444 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2159922 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2160643 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2161476 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2161760 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2162869 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2162972 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2163275 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2168382 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2170043 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2170421 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2170775 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2171123 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2171447 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2171618 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2171779 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2172099 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2172933 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2176033 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2176313 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2176690 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2176711 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2177282 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2177409 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2178089 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2178129 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2178443 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2178693 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2178983 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2179009 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2179637 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2179918 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2180246 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2184454 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2189245 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2199883 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2200694 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2205523 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2205878 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2210247 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2216352 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2219246 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2229744 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2255793 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2259684 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2304119 
00:24:29.147 Removing: /var/run/dpdk/spdk_pid2309501 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2315277 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2324532 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2364995 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2366084 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2367179 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2368413 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2373607 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2380036 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2387217 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2388271 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2389075 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2390128 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2390413 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2395049 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2395134 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2399732 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2400271 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2400871 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2401595 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2401695 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2406685 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2407344 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2411719 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2414642 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2420400 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2431488 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2431490 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2452046 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2452309 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2458276 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2458660 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2460672 00:24:29.147 Removing: /var/run/dpdk/spdk_pid2463555 00:24:29.147 Clean 00:24:29.147 18:12:37 -- common/autotest_common.sh@1453 -- # return 0 00:24:29.147 18:12:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:29.147 18:12:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.147 18:12:37 -- common/autotest_common.sh@10 -- # set +x 00:24:29.147 18:12:37 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:29.147 18:12:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.147 18:12:37 -- common/autotest_common.sh@10 -- # set +x 00:24:29.147 18:12:37 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:24:29.147 18:12:37 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:24:29.147 18:12:37 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:24:29.147 18:12:37 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:29.147 18:12:37 -- spdk/autotest.sh@398 -- # hostname 00:24:29.147 18:12:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:24:29.406 geninfo: WARNING: invalid characters removed from testname! 
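The coverage post-processing that follows is the standard lcov capture, merge, and filter sequence: capture counters into per-run .info tracefiles, add them together with -a, then strip third-party and system sources with -r before any report is generated. A generic sketch of the same flow, assuming a gcov-instrumented build tree (file names here are illustrative):

  # merge the baseline and test captures into one tracefile
  $ lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # drop bundled DPDK sources and system headers from the totals
  $ lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  $ lcov -q -r cov_total.info '/usr/*' -o cov_total.info
  # optionally render an HTML report from the filtered tracefile
  $ genhtml cov_total.info -o coverage_html

The per-pattern removals mirror what autotest.sh does in the steps below with its extra --rc and --ignore-errors options; only the input and output paths differ per run.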
00:24:51.342 18:12:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:51.910 18:12:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:53.815 18:13:01 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:55.721 18:13:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:57.099 18:13:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:24:59.003 18:13:06 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:00.382 18:13:08 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:00.382 18:13:08 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:00.382 18:13:08 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:25:00.382 18:13:08 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:00.382 18:13:08 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:00.382 18:13:08 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:00.641 + [[ -n 2077735 ]] 00:25:00.641 + sudo kill 2077735 00:25:00.651 [Pipeline] } 00:25:00.666 [Pipeline] // stage 00:25:00.671 [Pipeline] } 00:25:00.685 [Pipeline] 
// timeout 00:25:00.690 [Pipeline] } 00:25:00.704 [Pipeline] // catchError 00:25:00.709 [Pipeline] } 00:25:00.739 [Pipeline] // wrap 00:25:00.744 [Pipeline] } 00:25:00.757 [Pipeline] // catchError 00:25:00.768 [Pipeline] stage 00:25:00.771 [Pipeline] { (Epilogue) 00:25:00.784 [Pipeline] catchError 00:25:00.786 [Pipeline] { 00:25:00.799 [Pipeline] echo 00:25:00.801 Cleanup processes 00:25:00.806 [Pipeline] sh 00:25:01.094 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:01.094 2481726 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:01.107 [Pipeline] sh 00:25:01.393 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:01.393 ++ grep -v 'sudo pgrep' 00:25:01.393 ++ awk '{print $1}' 00:25:01.393 + sudo kill -9 00:25:01.393 + true 00:25:01.406 [Pipeline] sh 00:25:01.692 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:01.692 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:25:05.946 xz: Reduced the number of threads from 112 to 96 to not exceed the memory usage limit of 15,978 MiB 00:25:10.150 [Pipeline] sh 00:25:10.437 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:10.437 Artifacts sizes are good 00:25:10.453 [Pipeline] archiveArtifacts 00:25:10.461 Archiving artifacts 00:25:10.582 [Pipeline] sh 00:25:10.870 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:25:10.885 [Pipeline] cleanWs 00:25:10.896 [WS-CLEANUP] Deleting project workspace... 00:25:10.896 [WS-CLEANUP] Deferred wipeout is used... 00:25:10.905 [WS-CLEANUP] done 00:25:10.907 [Pipeline] } 00:25:10.925 [Pipeline] // catchError 00:25:10.939 [Pipeline] sh 00:25:11.225 + logger -p user.info -t JENKINS-CI 00:25:11.235 [Pipeline] } 00:25:11.249 [Pipeline] // stage 00:25:11.255 [Pipeline] } 00:25:11.271 [Pipeline] // node 00:25:11.277 [Pipeline] End of Pipeline 00:25:11.317 Finished: SUCCESS